Update website
diff --git a/index.html b/index.html
index bd0759d..c386ed7 100644
--- a/index.html
+++ b/index.html
@@ -304,6 +304,13 @@
 </li>
       
         <li class="md-nav__item">
+  <a href="#docker-images" class="md-nav__link">
+    Docker images
+  </a>
+  
+</li>
+      
+        <li class="md-nav__item">
   <a href="#source-code" class="md-nav__link">
     Source code
   </a>
@@ -848,6 +855,13 @@
 </li>
       
         <li class="md-nav__item">
+  <a href="#docker-images" class="md-nav__link">
+    Docker images
+  </a>
+  
+</li>
+      
+        <li class="md-nav__item">
   <a href="#source-code" class="md-nav__link">
     Source code
   </a>
@@ -1029,6 +1043,9 @@
 </tr>
 </tbody>
 </table>
+<h2 id="docker-images">Docker images<a class="headerlink" href="#docker-images" title="Permanent link">&para;</a></h2>
+<p>Docker images for the Hawk Server are available from the <a href="https://gitlab.com/hawklabs/hawk-docker/">hawk-docker</a> project at Hawk Labs.
+These images are rebuilt at least once a week, and whenever there are new changes in Hawk.</p>
 <h2 id="source-code">Source code<a class="headerlink" href="#source-code" title="Permanent link">&para;</a></h2>
 <p>To access the source code, clone the Git repository for Hawk with your preferred client from:</p>
 <div class="codehilite"><pre><span></span><code>https://gitlab.eclipse.org/eclipse/hawk/hawk.git
diff --git a/search/search_index.json b/search/search_index.json
index 1801990..e373489 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Eclipse Hawk \u00b6 Eclipse Hawk is a model indexing solution that can take models written with various technologies and turn them into graph databases, for easier and faster querying. Hawk is licensed under the Eclipse Public License 2.0 , with the GNU GPL 3.0 as secondary license. Any questions? Check the other sections on the left for how to get started and use Hawk. If you cannot find an answer there, feel free to ask at the official forum in Eclipse.org . Eclipse update sites \u00b6 The core components of Hawk, the OrientDB / Greycat backends, and the Thrift API clients can be installed from one of these Eclipse update sites: Site Location Stable https://download.eclipse.org/hawk/2.1.0/updates/ Interim https://download.eclipse.org/hawk/2.2.0/updates/ If you are developing a custom Hawk server, you will find the Hawk server components in these update sites: Site Location Stable https://download.eclipse.org/hawk/2.1.0/server/ Interim https://download.eclipse.org/hawk/2.2.0/server/ Plain libraries \u00b6 Many of the Eclipse Hawk components are available via Maven Central under the org.eclipse.hawk group ID: Site Repository Group ID Version Stable Maven Central org.eclipse.hawk 2.1.0 Interim OSSRH org.eclipse.hawk 2.2.0-SNAPSHOT Thrift API libraries \u00b6 There are Apache Thrift client libraries targeting C++, Java, JavaScript, and Python for talking with a Hawk server over its Thrift API. The Java libraries are available as Maven artefacts (see above). The C++ and JavaScript libraries can be downloaded from the links below. 
C++ libraries \u00b6 Site Location Stable http://download.eclipse.org/hawk/2.1.0/hawk-thrift-cpp-2.1.0.tar.gz Interim http://download.eclipse.org/hawk/2.2.0/hawk-thrift-cpp-2.2.0.tar.gz JavaScript libraries \u00b6 Site Location Stable http://download.eclipse.org/hawk/2.1.0/hawk-thrift-js-2.1.0.tar.gz Interim http://download.eclipse.org/hawk/2.2.0/hawk-thrift-js-2.2.0.tar.gz Python libraries \u00b6 Site Location Stable http://download.eclipse.org/hawk/2.1.0/hawk-thrift-py-2.1.0.tar.gz Interim http://download.eclipse.org/hawk/2.2.0/hawk-thrift-py-2.2.0.tar.gz Firewall-friendly artefacts \u00b6 For environments with corporate firewalls, the zipped update sites, zipped source code, and prebuilt CLI/server products for Linux, MacOS and Windows are available from the download folders: Folder Location Stable https://download.eclipse.org/hawk/2.1.0/ Interim https://download.eclipse.org/hawk/2.2.0/ Source code \u00b6 To access the source code, clone the Git repository for Hawk with your preferred client from: https://gitlab.eclipse.org/eclipse/hawk/hawk.git Committers will use a different URL: git@gitlab.eclipse.org:eclipse/hawk/hawk.git You can also read the code through your browser from the Eclipse Gitlab instance (which allows for archive downloads). Older versions \u00b6 Downloads for older versions are archived at Eclipse.org: 2.0.0","title":"Home"},{"location":"#eclipse-hawk","text":"Eclipse Hawk is a model indexing solution that can take models written with various technologies and turn them into graph databases, for easier and faster querying. Hawk is licensed under the Eclipse Public License 2.0 , with the GNU GPL 3.0 as secondary license. Any questions? Check the other sections on the left for how to get started and use Hawk. 
If you cannot find an answer there, feel free to ask at the official forum in Eclipse.org .","title":"Eclipse Hawk"},{"location":"#eclipse-update-sites","text":"The core components of Hawk, the OrientDB / Greycat backends, and the Thrift API clients can be installed from one of these Eclipse update sites: Site Location Stable https://download.eclipse.org/hawk/2.1.0/updates/ Interim https://download.eclipse.org/hawk/2.2.0/updates/ If you are developing a custom Hawk server, you will find the Hawk server components in these update sites: Site Location Stable https://download.eclipse.org/hawk/2.1.0/server/ Interim https://download.eclipse.org/hawk/2.2.0/server/","title":"Eclipse update sites"},{"location":"#plain-libraries","text":"Many of the Eclipse Hawk components are available via Maven Central under the org.eclipse.hawk group ID: Site Repository Group ID Version Stable Maven Central org.eclipse.hawk 2.1.0 Interim OSSRH org.eclipse.hawk 2.2.0-SNAPSHOT","title":"Plain libraries"},{"location":"#thrift-api-libraries","text":"There are Apache Thrift client libraries targeting C++, Java, JavaScript, and Python for talking with a Hawk server over its Thrift API. The Java libraries are available as Maven artefacts (see above). 
The C++ and JavaScript libraries can be downloaded from the links below.","title":"Thrift API libraries"},{"location":"#c-libraries","text":"Site Location Stable http://download.eclipse.org/hawk/2.1.0/hawk-thrift-cpp-2.1.0.tar.gz Interim http://download.eclipse.org/hawk/2.2.0/hawk-thrift-cpp-2.2.0.tar.gz","title":"C++ libraries"},{"location":"#javascript-libraries","text":"Site Location Stable http://download.eclipse.org/hawk/2.1.0/hawk-thrift-js-2.1.0.tar.gz Interim http://download.eclipse.org/hawk/2.2.0/hawk-thrift-js-2.2.0.tar.gz","title":"JavaScript libraries"},{"location":"#python-libraries","text":"Site Location Stable http://download.eclipse.org/hawk/2.1.0/hawk-thrift-py-2.1.0.tar.gz Interim http://download.eclipse.org/hawk/2.2.0/hawk-thrift-py-2.2.0.tar.gz","title":"Python libraries"},{"location":"#firewall-friendly-artefacts","text":"For environments with corporate firewalls, the zipped update sites, zipped source code, and prebuilt CLI/server products for Linux, MacOS and Windows are available from the download folders: Folder Location Stable https://download.eclipse.org/hawk/2.1.0/ Interim https://download.eclipse.org/hawk/2.2.0/","title":"Firewall-friendly artefacts"},{"location":"#source-code","text":"To access the source code, clone the Git repository for Hawk with your preferred client from: https://gitlab.eclipse.org/eclipse/hawk/hawk.git Committers will use a different URL: git@gitlab.eclipse.org:eclipse/hawk/hawk.git You can also read the code through your browser from the Eclipse Gitlab instance (which allows for archive downloads).","title":"Source code"},{"location":"#older-versions","text":"Downloads for older versions are archived at Eclipse.org: 2.0.0","title":"Older versions"},{"location":"additional-resources/","text":"Screencasts \u00b6 We have several screencasts that show how to use Hawk and work on its code: Running of basic operations Use of advanced features Download and configuration of Hawk onto a fresh Eclipse Luna (Modeling 
Tools) distribution How to use Hawk to add Modelio metamodel(s) Use Hawk Server to auto-configure & start Hawk Instances Papers \u00b6 Hawk has been at the core of a long series of papers. These are listed in chronological order (from oldest to newest): Hawk: towards a scalable model indexing architecture A Framework to Benchmark NoSQL Data Stores for Large-Scale Model Persistence Towards Scalable Querying of Large-Scale Models Evaluation of Contemporary Graph Databases for Efficient Persistence of Large-Scale Models Towards Incremental Updates in Large-Scale Model Indexes Towards Scalable Model Indexing (PhD Thesis) Stress-testing remote model querying APIs for relational and graph-based stores Integration of a graph-based model indexer in commercial modelling tools Integration of Hawk for Model Metrics into the MEASURE Platform Hawk solutions to the TTC 2018 Social Media Case Scaling-up domain-specific modelling languages through modularity services Querying and Annotating Model Histories with Time-Aware Patterns Scalable modeling technologies in the wild: an experience report on wind turbines control applications development Book chapter: Monitoring model analytics over large repositories with Hawk and MEASURE Temporal Models for History-Aware Explainability Slides \u00b6 Hawk: indexado de modelos en bases de datos NoSQL - 90 minute slides in Spanish about the MONDO project and Hawk MODELS18 tutorial on NeoEMF and Hawk Related tools \u00b6 The HawkQuery SMM MEASURE library allows using Hawk servers as metric providers for the MEASURE platform.","title":"Additional resources"},{"location":"additional-resources/#screencasts","text":"We have several screencasts that show how to use Hawk and work on its code: Running of basic operations Use of advanced features Download and configuration of Hawk onto a fresh Eclipse Luna (Modeling Tools) distribution How to use Hawk to add Modelio metamodel(s) Use Hawk Server to auto-configure & start Hawk 
Instances","title":"Screencasts"},{"location":"additional-resources/#papers","text":"Hawk has been at the core of a long series of papers. These are listed in chronological order (from oldest to newest): Hawk: towards a scalable model indexing architecture A Framework to Benchmark NoSQL Data Stores for Large-Scale Model Persistence Towards Scalable Querying of Large-Scale Models Evaluation of Contemporary Graph Databases for Efficient Persistence of Large-Scale Models Towards Incremental Updates in Large-Scale Model Indexes Towards Scalable Model Indexing (PhD Thesis) Stress-testing remote model querying APIs for relational and graph-based stores Integration of a graph-based model indexer in commercial modelling tools Integration of Hawk for Model Metrics into the MEASURE Platform Hawk solutions to the TTC 2018 Social Media Case Scaling-up domain-specific modelling languages through modularity services Querying and Annotating Model Histories with Time-Aware Patterns Scalable modeling technologies in the wild: an experience report on wind turbines control applications development Book chapter: Monitoring model analytics over large repositories with Hawk and MEASURE Temporal Models for History-Aware Explainability","title":"Papers"},{"location":"additional-resources/#slides","text":"Hawk: indexado de modelos en bases de datos NoSQL - 90 minute slides in Spanish about the MONDO project and Hawk MODELS18 tutorial on NeoEMF and Hawk","title":"Slides"},{"location":"additional-resources/#related-tools","text":"The HawkQuery SMM MEASURE library allows using Hawk servers as metric providers for the MEASURE platform.","title":"Related tools"},{"location":"advanced-use/advanced-props/","text":"When querying through EOL, we can access several extra properties on any model element: eAllContents : collection with all the model elements directly or indirectly contained within this one. 
eContainer : returns the model element that contains this one, or null if it does not have a container. eContents : collection with all the model elements directly contained within this one. hawkFile : string with the repository paths of the files that this model element belongs to, separated by \";\". hawkFiles : collection with the repository paths of all the files that this model element belongs to. hawkIn : collection with all the model elements accessible through incoming references. hawkInEdges : collection with all the incoming references (see their attributes below). hawkOut : collection with all the model elements accessible through outgoing references. hawkOutEdges : collection with all the outgoing references (see their attributes below). hawkProxies : collection with all the proxy reference lists (see their properties below). hawkRepo : string with the URLs of the repositories that this model element belongs to, separated by \";\". hawkRepos : collection with all the repositories that this model element belongs to. hawkURIFragment : URI fragment of the model element within its file. There is also the isContainedWithin(repo, path) method for checking if an element is directly or indirectly contained within a certain file. References \u00b6 References are wrapped into entities of their own, with the following attributes: edge : raw edge, without wrapping. type / name : name of the reference. source / startNode : source of the reference. target / endNode : target of the reference. Proxy reference lists \u00b6 A proxy reference list represents all the unresolved links from a node to the elements in a certain file. These links may be unresolved as the file may be missing, or the specific elements may not be in the file. Proxy reference lists have the following fields: sourceNodeID : unique ID of the source model element node for these proxy references. targetFile : returns an object which refers to the target file. 
This object has several fields: repositoryURL : string with URL of the repository that should have this file. filePath : string with path within the repository for the file. references : returns a collection with each of the proxy references to the missing file. Each reference has the following fields: edgeLabel : name of the proxy reference. isContainment : true if and only if the proxy reference is a containment reference (the target is contained within the source). isContainer : true if and only if the proxy reference is a container reference (the source is contained within the target). target : object which refers to the target of the proxy reference. This object has several fields: repositoryURL : string with URL of the repository that should have this file. filePath : string with path within the repository for the file. fragment : string with the fragment that identifies the model element within the file. isFragmentBased : true if and only if the proxy reference is purely fragment-based (file path is irrelevant). This can be the case for some modelling technologies (e.g. Modelio).","title":"Advanced properties"},{"location":"advanced-use/advanced-props/#references","text":"References are wrapped into entities of their own, with the following attributes: edge : raw edge, without wrapping. type / name : name of the reference. source / startNode : source of the reference. target / endNode : target of the reference.","title":"References"},{"location":"advanced-use/advanced-props/#proxy-reference-lists","text":"A proxy reference list represents all the unresolved links from a node to the elements in a certain file. These links may be unresolved as the file may be missing, or the specific elements may not be in the file. Proxy reference lists have the following fields: sourceNodeID : unique ID of the source model element node for these proxy references. targetFile : returns an object which refers to the target file. 
This object has several fields: repositoryURL : string with URL of the repository that should have this file. filePath : string with path within the repository for the file. references : returns a collection with each of the proxy references to the missing file. Each reference has the following fields: edgeLabel : name of the proxy reference. isContainment : true if and only if the proxy reference is a containment reference (the target is contained within the source). isContainer : true if and only if the proxy reference is a container reference (the source is contained within the target). target : object which refers to the target of the proxy reference. This object has several fields: repositoryURL : string with URL of the repository that should have this file. filePath : string with path within the repository for the file. fragment : string with the fragment that identifies the model element within the file. isFragmentBased : true if and only if the proxy reference is purely fragment-based (file path is irrelevant). This can be the case for some modelling technologies (e.g. Modelio).","title":"Proxy reference lists"},{"location":"advanced-use/graph-as-emf/","text":"In addition to regular querying, it is possible to use a Hawk graph as a model itself. To do so, use the \"File > New > Other > Hawk > Local Hawk Model Descriptor\" wizard and select the Hawk instance you want to access as a model. Once the wizard is finished, open the .localhawkmodel file to browse through it as an EMF model. You will need to ensure that the EPackages of the indexed models are part of your EMF package registry: normally Hawk should ensure this happens. For a Hawk index containing the GraBaTs 2009 set0.xmi file, it will look like this: The actual editor is a customized version of the Epsilon Exeed editor, which is based on the standard EMF reflective tree-based editor. The contents of the graph are navigated lazily, so we can open huge models very quickly and navigate through them. 
The editor also provides additional \"Custom\" actions when we right click on a top-level node (usually labelled with URLs). Currently, it supports an efficient Fetch by EClass method, that allows fetching all the instances of a type immediately, without having to load the rest of the model. Future versions of Hawk may expose additional operations through this menu. Finally, the EMF resource can be used normally from any EMF-based tools (e.g. transformation engines). However, to make the most out of the resources it will be necessary to extend the tools to have them integrate the efficient graph-based operations that are not part of the EMF Resource interface.","title":"Graph as EMF model"},{"location":"advanced-use/meta-queries/","text":"Hawk extends the regular EOL facilities to be able to query the metamodels registered within the instance: Model.files lists all the files indexed by Hawk (may be limited through the context). Model.metamodels lists all the metamodels registered in Hawk ( EPackage instances for EMF). Model.proxies lists all the proxy reference lists present in the graph. Each proxy reference list is a collection of the unresolved references from a model element node to the elements of a particular file. For details, please consult the advanced properties page . Model.types lists all the types registered in Hawk ( EClass instances for EMF). Model.getFileOf(obj) retrieves the first file containing the object obj . Model.getFilesOf(obj) retrieves all the files containing the object obj . Model.getProxies(repositoryPrefix) lists all the proxy reference lists for files in repositories matching the specified prefix. Model.getTypeOf(obj) retrieves the type of the object obj . Metamodels \u00b6 For a metamodel mm , these attributes are available: mm.dependencies lists the metamodels this metamodel depends on (usually at least the Ecore metamodel for EMF-based metamodels). mm.metamodelType is the type of metamodel that was registered. 
mm.node returns the underlying IGraphNode . mm.resource retrieves the original string representation for this metamodel (the original .ecore file for EMF). mm.types lists the types defined in this metamodel. mm.uri is the namespace URI of the metamodel. Types \u00b6 For a type t , these attributes are available: t.all retrieves all instances of that type efficiently (includes subtypes). t.attributes lists the attributes of the type, as slots (see below). t.features lists the attributes and references of the type. t.metamodel retrieves the metamodel that defines the type. t.name retrieves the name of the type. t.node returns the underlying IGraphNode . t.references lists the references of the type, as slots. Slots \u00b6 For a slot sl , these attributes are available: sl.name : name of the slot. sl.type : type of the value of the slot. sl.isMany : true if this is a multi-valued slot. sl.isOrdered : true if the values should follow some order. sl.isAttribute : true if this is an attribute slot. sl.isReference : true if this is a reference slot. sl.isUnique : true if the value for this slot should be unique within its model. Files \u00b6 For a file f , these attributes are available: f.contents : returns all the model elements in the file. f.node : returns the underlying IGraphNode . f.path : returns the path of the file within the repository (e.g. /input.xmi ). f.repository : returns the URL of the repository (e.g. file:///home/myuser/models ). f.roots : returns the root model elements in the file.","title":"Meta-level queries"},{"location":"advanced-use/meta-queries/#metamodels","text":"For a metamodel mm , these attributes are available: mm.dependencies lists the metamodels this metamodel depends on (usually at least the Ecore metamodel for EMF-based metamodels). mm.metamodelType is the type of metamodel that was registered. mm.node returns the underlying IGraphNode . 
mm.resource retrieves the original string representation for this metamodel (the original .ecore file for EMF). mm.types lists the types defined in this metamodel. mm.uri is the namespace URI of the metamodel.","title":"Metamodels"},{"location":"advanced-use/meta-queries/#types","text":"For a type t , these attributes are available: t.all retrieves all instances of that type efficiently (includes subtypes). t.attributes lists the attributes of the type, as slots (see below). t.features lists the attributes and references of the type. t.metamodel retrieves the metamodel that defines the type. t.name retrieves the name of the type. t.node returns the underlying IGraphNode . t.references lists the references of the type, as slots.","title":"Types"},{"location":"advanced-use/meta-queries/#slots","text":"For a slot sl , these attributes are available: sl.name : name of the slot. sl.type : type of the value of the slot. sl.isMany : true if this is a multi-valued slot. sl.isOrdered : true if the values should follow some order. sl.isAttribute : true if this is an attribute slot. sl.isReference : true if this is a reference slot. sl.isUnique : true if the value for this slot should be unique within its model.","title":"Slots"},{"location":"advanced-use/meta-queries/#files","text":"For a file f , these attributes are available: f.contents : returns all the model elements in the file. f.node : returns the underlying IGraphNode . f.path : returns the path of the file within the repository (e.g. /input.xmi ). f.repository : returns the URL of the repository (e.g. file:///home/myuser/models ). f.roots : returns the root model elements in the file.","title":"Files"},{"location":"advanced-use/oomph/","text":"Oomph has a feature that synchronizes preferences across workspaces (see bug 490549 ). This can be a problem if you expect different workspaces to have different Hawk indexes. 
If so, you should reconfigure Oomph so it does not record the /instance/org.hawk.osgiserver preferences node at the \"User\" and \"Installation\" levels. To do this, go to \"Window > Preferences\", select \"Oomph > Setup Tasks > Preference Recorder\", check \"Record into\", select \"User\" and make sure /instance/org.hawk.osgiserver/config either does not appear or is unchecked . It should be the same for \"Installation\" and \"Workspace\".","title":"Oomph and Hawk"},{"location":"advanced-use/temporal-queries/","text":"The latest versions of Hawk have the capability to index every version of all the models in the locations being monitored. To enable this capability, your Hawk index must meet certain conditions: You must be using a time-aware backend (currently, Greycat). You must be using the time-aware updater (TimeAwareModelUpdater) and not the standard one. You must be using the time-aware indexer factory and not the standard one (TimeAwareHawkFactory). You must query the index with a time-aware query language: org.hawk.timeaware.queries.TimeAwareEOLQueryEngine org.hawk.timeaware.queries.TimelineEOLQueryEngine If you meet these constraints, you can index a SVN repository with models and Hawk will turn the full history of every model into an integrated temporal graph database, or index a workspace/local folder and have Hawk remember the history of every model from then onwards. You will be able to query this temporal graph through an extension of Hawk's EOL dialect. This functionality was first discussed in our MRT 2018 paper, \"Reflecting on the past and the present with temporal graph-based models\". Data model \u00b6 The usual type -> model element graph in Hawk is extended to give both types and model elements their own histories. The histories are defined as follows: Types are immortal: they are created at the first endpoint in the graph and last to the \"end of time\" of the graph. 
There is a new version whenever an instance of the type is created or destroyed. Model elements are created at a certain timepoint, and either survive or are destroyed at another timepoint. Model elements are assumed to have a persistent identity: either its natural/artificial identifier, or its location within the model. New versions are produced when an attribute or a reference changes. Timepoints are provided by the Hawk connectors, and they tend to be commit timestamps or file timestamps. In SVN, these are commit timestamps to millisecond precision. Basic history traversal primitives \u00b6 The actual primitives are quite simple. In the time-aware dialect of Hawk, types and model elements expose the following additional attributes and operations: x.versions : returns the sequence of all versions for x , from newest to oldest x.getVersionsBetween(from, to) : versions within a range of timepoints x.getVersionsFrom(from) : versions from a timepoint (included) x.getVersionsUpTo(from) : versions up to a timepoint (included) x.earliest , x.latest : earliest / latest version x.next , x.prev / x.previous : next / previous version x.time : version timepoint Temporal assertions \u00b6 It is possible to evaluate assertions over the history of a type or model element: x.always(version | predicate over version) : true if and only if (\"iff\") the predicate is true for every version of x . x.never(version | predicate over version) : true iff the predicate is false for every version of x . x.eventually(version | predicate over version) : true iff the predicate is true for some version of x . x.eventuallyAtLeast(version | predicate over version, count) : true iff the predicate is true in at least count versions of x . x.eventuallyAtMost(version | predicate over version, count) : true iff the predicate is true in at least one version and at most count versions of x . 
Scoping views (predicate-based) \u00b6 The versions in scope for the above assertions and primitives can be limited with: x.since(version | predicate over version) will return the type/model element in the oldest timepoint since that of x for which the predicate holds, or null if it does not exist. The returned type/model element will only report versions from its timepoint onwards. This essentially imposes a left-closed version interval. x.after(version | predicate over version) will return the type/model element in the timepoint immediately after the oldest timepoint for which the predicate holds, or null if it does not exist. It is essentially a variant of x.since that implements a left-open interval. x.until(version | predicate over version) will return the same type/model element, but it will only report versions up to and including the first one for which the predicate holds, or null if such a version does not exist. This implements a right-closed version interval. x.before(version | predicate over version) will return the same type/model element, but it will only report versions before (excluding) the first one for which the predicate holds, or null if such a version does not exist. This implements a right-open interval. x.when(version | predicate over version) will return the type/model element in the oldest timepoint since that of x for which the predicate holds, or null if it does not exist. The returned type/model element will only report versions from its timepoint onwards that match the predicate. This is a left-closed, filtered interval. Scoping views (context-based) \u00b6 You can also limit the available versions from an existing type / model element: x.sinceThen : version of x that will only report the versions from x onwards (included). x.afterThen : next version of x that will only report the versions after x (excluded). null if a next version does not exist. x.untilThen : version of x that will only report the versions up to x (included). 
x.beforeThen : previous version of x that will only report the versions before x (excluded). null if a previous version does not exist. You can undo the scoping with .unscoped . This will give you the same model element or type, but with all the versions available once more. Scoping views (based on derived attributes) \u00b6 Some of the events we may be interested in may be very rare. In long histories, it may be very expensive to find such rare events by iterating over all the versions of a model element. In these cases, it is possible to define a derived Boolean attribute (e.g. HasManyChildren for a Tree , with definition return self.children.size > 100; ) on a type, and then use these additional operations: x.whenAnnotated('AttributeName') : returns a view of the model element x that exposes all the versions when the derived attribute named AttributeName defined on the type of x was true . The view will be at the earliest timepoint when this happened. x.sinceAnnotated('AttributeName') : equivalent to since , but using the derived attribute AttributeName . x.afterAnnotated('AttributeName') : equivalent to after . See above. x.untilAnnotated('AttributeName') : equivalent to until . See above. x.beforeAnnotated('AttributeName') : equivalent to before . See above. IMPORTANT : until #83 is resolved, you will need to define these derived attributes before you index any model versions. Global operations on the model \u00b6 The Model global reference is extended with new operations: Model.allInstancesNow returns all instances of the model at the timepoint equal to current system time. Model.allInstancesAt(timepoint) returns all instances of the model at the specified timepoint, measured in the integer amount of milliseconds elapsed since the epoch. Model.getRepository(object) will return a node representing the repository (VCS) that the object belongs to at its current timepoint. 
From the returned node, you may retrieve the .revision (SVN revision, folder timestamp or Git SHA-1), and the .message associated with the corresponding revision. Some examples \u00b6 A simple query to find the number of instances of X in the latest version of the model would be: return X.latest.all.size; If we want to find the second-to-last time that instances of X were created, we could write something like: return X.latest.prev.time; If we want to find an X that at some point had y greater than 0 and still survives to the latest revision, we could write something like: return X.latest.all.select(x|x.versions.exists(vx|vx.y > 0)); More advanced queries can be found in the Git repository for the MRT 2018 experiment tool . Timeline queries \u00b6 If you want to obtain the results of a certain query for all versions of a model, you can use the TimelineEOLQueryEngine instead. This operates by repeating the same query while changing the global timepoint of the graph, so you can write your query as a normal one and see how it evolves over time. For instance, if using return Model.allInstances.size; , you would see how the number of instances evolved over the various versions of the graph. NOTE: due to current implementation restrictions, this will only process versions where type nodes changed (i.e. objects were created or deleted). We plan to lift this restriction in the near future. Current limitations \u00b6 Subtree contexts, file-first/derived allOf and traversal scoping are not yet implemented for this query engine. File/repository patterns do work. Derived features will only work if added before any VCSes are added, and the impact of adding multiple VCS with their own histories has not been tested yet. Please make sure to report any issues!","title":"Temporal queries"},{"location":"advanced-use/temporal-queries/#data-model","text":"The usual type -> model element graph in Hawk is extended to give both types and model elements their own histories. 
The histories are defined as follows: Types are immortal: they are created at the first timepoint in the graph and last to the \"end of time\" of the graph. There is a new version whenever an instance of the type is created or destroyed. Model elements are created at a certain timepoint, and either survive or are destroyed at another timepoint. Model elements are assumed to have a persistent identity: either their natural/artificial identifier, or their location within the model. New versions are produced when an attribute or a reference changes. Timepoints are provided by the Hawk connectors, and they tend to be commit timestamps or file timestamps. In SVN, these are commit timestamps to millisecond precision.","title":"Data model"},{"location":"advanced-use/temporal-queries/#basic-history-traversal-primitives","text":"The actual primitives are quite simple. In the time-aware dialect of Hawk, types and model elements expose the following additional attributes and operations: x.versions : returns the sequence of all versions for x , from newest to oldest x.getVersionsBetween(from, to) : versions within a range of timepoints x.getVersionsFrom(from) : versions from a timepoint (included) x.getVersionsUpTo(to) : versions up to a timepoint (included) x.earliest , x.latest : earliest / latest version x.next , x.prev / x.previous : next / previous version x.time : version timepoint","title":"Basic history traversal primitives"},{"location":"advanced-use/temporal-queries/#temporal-assertions","text":"It is possible to evaluate assertions over the history of a type or model element: x.always(version | predicate over version) : true if and only if (\"iff\") the predicate is true for every version of x . x.never(version | predicate over version) : true iff the predicate is false for every version of x . x.eventually(version | predicate over version) : true iff the predicate is true for some version of x . 
x.eventuallyAtLeast(version | predicate over version, count) : true iff the predicate is true in at least count versions of x . x.eventuallyAtMost(version | predicate over version, count) : true iff the predicate is true in at least one version and at most count versions of x .","title":"Temporal assertions"},{"location":"advanced-use/temporal-queries/#scoping-views-predicate-based","text":"The versions in scope for the above assertions and primitives can be limited with: x.since(version | predicate over version) will return the type/model element in the oldest timepoint since that of x for which the predicate holds, or null if it does not exist. The returned type/model element will only report versions from its timepoint onwards. This essentially imposes a left-closed version interval. x.after(version | predicate over version) will return the type/model element in the timepoint immediately after the oldest timepoint for which the predicate holds, or null if it does not exist. It is essentially a variant of x.since that implements a left-open interval. x.until(version | predicate over version) will return the same type/model element, but it will only report versions up to and including the first one for which the predicate holds, or null if such a version does not exist. This implements a right-closed version interval. x.before(version | predicate over version) will return the same type/model element, but it will only report versions before (excluding) the first one for which the predicate holds, or null if such a version does not exist. This implements a right-open interval. x.when(version | predicate over version) will return the type/model element in the oldest timepoint since that of x for which the predicate holds, or null if it does not exist. The returned type/model element will only report versions from its timepoint onwards that match the predicate. 
This is a left-closed, filtered interval.","title":"Scoping views (predicate-based)"},{"location":"advanced-use/temporal-queries/#scoping-views-context-based","text":"You can also limit the available versions from an existing type / model element: x.sinceThen : version of x that will only report the versions from x onwards (included). x.afterThen : next version of x that will only report the versions after x (excluded). null if a next version does not exist. x.untilThen : version of x that will only report the versions up to x (included). x.beforeThen : previous version of x that will only report the versions before x (excluded). null if a previous version does not exist. You can undo the scoping with .unscoped . This will give you the same model element or type, but with all the versions available once more.","title":"Scoping views (context-based)"},{"location":"advanced-use/temporal-queries/#scoping-views-based-on-derived-attributes","text":"Some of the events we may be interested in may be very rare. In long histories, it may be very expensive to find such rare events by iterating over all the versions of a model element. In these cases, it is possible to define a derived Boolean attribute (e.g. HasManyChildren for a Tree , with definition return self.children.size > 100; ) on a type, and then use these additional operations: x.whenAnnotated('AttributeName') : returns a view of the model element x that exposes all the versions when the derived attribute named AttributeName defined on the type of x was true . The view will be at the earliest timepoint when this happened. x.sinceAnnotated('AttributeName') : equivalent to since , but using the derived attribute AttributeName . x.afterAnnotated('AttributeName') : equivalent to after . See above. x.untilAnnotated('AttributeName') : equivalent to until . See above. x.beforeAnnotated('AttributeName') : equivalent to before . See above. 
IMPORTANT : until #83 is resolved, you will need to define these derived attributes before you index any model versions.","title":"Scoping views (based on derived attributes)"},{"location":"advanced-use/temporal-queries/#global-operations-on-the-model","text":"The Model global reference is extended with new operations: Model.allInstancesNow returns all instances of the model at the timepoint equal to current system time. Model.allInstancesAt(timepoint) returns all instances of the model at the specified timepoint, measured in the integer amount of milliseconds elapsed since the epoch. Model.getRepository(object) will return a node representing the repository (VCS) that the object belongs to at its current timepoint. From the returned node, you may retrieve the .revision (SVN revision, folder timestamp or Git SHA-1), and the .message associated with the corresponding revision.","title":"Global operations on the model"},{"location":"advanced-use/temporal-queries/#some-examples","text":"A simple query to find the number of instances of X in the latest version of the model would be: return X.latest.all.size; If we want to find the second-to-last time that instances of X were created, we could write something like: return X.latest.prev.time; If we want to find an X that at some point had y greater than 0 and still survives to the latest revision, we could write something like: return X.latest.all.select(x|x.versions.exists(vx|vx.y > 0)); More advanced queries can be found in the Git repository for the MRT 2018 experiment tool .","title":"Some examples"},{"location":"advanced-use/temporal-queries/#timeline-queries","text":"If you want to obtain the results of a certain query for all versions of a model, you can use the TimelineEOLQueryEngine instead. This operates by repeating the same query while changing the global timepoint of the graph, so you can write your query as a normal one and see how it evolves over time. 
For instance, if using return Model.allInstances.size; , you would see how the number of instances evolved over the various versions of the graph. NOTE: due to current implementation restrictions, this will only process versions where type nodes changed (i.e. objects were created or deleted). We plan to lift this restriction in the near future.","title":"Timeline queries"},{"location":"advanced-use/temporal-queries/#current-limitations","text":"Subtree contexts, file-first/derived allOf and traversal scoping are not yet implemented for this query engine. File/repository patterns do work. Derived features will only work if added before any VCSes are added, and the impact of adding multiple VCS with their own histories has not been tested yet. Please make sure to report any issues!","title":"Current limitations"},{"location":"basic-use/core-concepts/","text":"Core concepts and general usage \u00b6 Components \u00b6 Hawk is an extensible system. Currently, it contains the following kinds of components: Type Role Current implementations Change listeners React to changes in the graph produced by the updaters Tracing, Validation Graph backends Integrate database technologies Neo4j , OrientDB , Greycat Model drivers Integrate modelling technologies Ecore , BPMN , Modelio , IFC2x3/IFC4 in this repo , and UML2 Query languages Translate high-level queries into efficient graph queries Epsilon Object Language , Epsilon Pattern Language , OrientDB SQL Updaters Update the graph based on the detected changes in the models and metamodels Built-in VCS managers Integrate file-based model repositories Local folders, SVN repositories, Git repositories, Eclipse workspaces, HTTP files General usage \u00b6 Using Hawk generally involves these steps: Create a new Hawk index, based on a specific backend (e.g. Neo4j or OrientDB). Add the required metamodels to the index. Add the model repositories to be monitored. Wait for the initial batch insert (may take some time in large repositories). 
Add the desired indexed and derived attributes. Perform fast and efficient queries on the graph, using one of the supported query languages (see table above). In the following sections, we will show how to perform these steps. Managing indexes with the Hawk view \u00b6 To manage and use Hawk indexes, first open the \"Hawk\" Eclipse view, using \"Window > Show View > Other... > Hawk > Hawk\". It should look like this: Hawk indexes are queried and managed from this view. From left to right, the buttons are: Query: opens the query dialog. Run: starts a Hawk index if it was stopped. Stop: stops a Hawk index if it was running. Sync: requests the Hawk index to check the indexed repositories immediately. Delete: removes an index from the Hawk view, without deleting the actual database (it can usually be recovered later using the \"Import\" button). To remove a local index completely, select it and press Shift+Delete . New: creates a new index (more info below). Import: imports a Hawk index from a factory. Hawk itself only provides a \"local\" factory that looks at the subdirectories of the current Eclipse workspace. Configure: opens the index configuration dialog, which allows for managing the registered metamodels, the repositories to be indexed, the attributes to be derived and the attributes to be indexed. Creating a new index \u00b6 To create a new index, open the Hawk view and use the \"New\" button to open this dialog: The dialog requires these fields: Name: a descriptive name for the index. Only used as an identifier. Instance type: Hawk only supports local instances, but mondo-integration can add support for remote instances. Local storage folder: folder that will store the actual database. If the folder exists, Hawk will reuse that database instead of creating a new one. Remote location: only used for the remote instances in mondo-integration . Enabled plugins: list of plugins that are currently enabled in Hawk. 
Back-end: database backend to be used (currently either Neo4j or OrientDB). Min/max delay: minimum and maximum delays in milliseconds between synchronisations. Hawk will start at the minimum value: every time it does not find any changes, it will double the delay up to the maximum value. If it finds a change, it will reset back to the minimum value. Periodic synchronisation can be completely disabled by changing the minimum and maximum delays to 0: in this mode, Hawk will only synchronise on startup, when a repository is added or when the user requests it manually. Once these fields have been filled in, Hawk will create and set up the index in a short period. Managing metamodels \u00b6 After creating the index, the next step is to register the metamodels of the models that are going to be indexed. To do this, select the index in the Hawk view and either double click it or click on the \"Configure\" button. The configure dialog will open: The configure dialog has several tabs. For managing metamodels, we need to go to the \"Metamodels\" tab. It will list the URIs of the currently registered metamodels. If a metamodel we need is not listed there, we can use the \"Add\" button to provide Hawk with the appropriate file to be indexed (e.g. the .ecore file for EMF-based models, or the metamodel-descriptor.xml for Modelio-based models). We can also \"Remove\" metamodels: this will remove all dependent models and metamodels as well. To try out Hawk, we recommend adding the JDTAST.ecore metamodel, which was used in the GraBaTs 2009 case study from AtlanMod . For Modelio metamodels, use the metamodel-descriptor.xml for Modelio 3.6 projects (for older projects, use the older descriptors included as metamodel_*.xml files in the Modelio 3.6 sources ). Keep in mind that metamodels may have dependencies on others. You will need to either add all metamodels at once, or add each metamodel after those it depends upon. 
If adding all the metamodels at once, Hawk will rearrange their addition taking into account their mutual dependencies. Note: the EMF driver can parse regular Ecore metamodels with the .ecore extension. Note: regarding the Modelio metamodel-descriptor.xml files, you can find those as part of the Modelio source code . Managing repositories \u00b6 Having added the metamodels of the models to be indexed, the following step is to add the repositories to be indexed. To do so, go to the \"Indexed Locations\" tab of the Hawk configure dialog, and use the \"Add\" button. Hawk will present the following dialog: The fields to be used are as follows: Type: type of repository to be indexed. Location: URL or path to the repository. For local folders, it is recommended to use the \"Browse...\" button to produce the appropriate file:// URL. For SVN, it is best to copy and paste the full URL. For Git repositories, you can use a path to the root folder of your Git clone, or a file://path/to/repo[?branch=BRANCH] URL (where the optional ?branch=BRANCH part can be used to specify a branch other than the one currently checked out). For Workspace repositories, the location is irrelevant: selecting any directory from \"Browse...\" will work just the same. User + pass: for private SVN repositories, these will be the username and password to be used to connect to the repository. Hawk will store the password in the Eclipse secure storage. To try out Hawk, after adding the JDTAST.ecore metamodel from the previous section, we recommend adding a folder with a copy of the set0.xmi file. It has around 70k model elements. To watch over the indexing process, look at the \"Error Log\" view or run Eclipse with the -console option. The supported file extensions are as follows: Driver Extensions EMF .xmi , .model , any extensions in the EMF extension factory map, any extensions mentioned through the org.hawk.emf.model.extraExtensions Java system property (e.g. 
-Dorg.hawk.emf.model.extraExtensions=.railway,.rail ). UML2 .uml . .profile.uml files can be indexed normally and also registered as metamodels. BPMN .bpmn , .bpmn2 . Modelio .exml , .ramc . Parses mmversion.dat internally for metadata. IFC .ifc , .ifcxml , .ifc.txt , .ifcxml.txt , .ifc.zip , .ifczip . Managing indexed attributes \u00b6 Simply indexing the models into the graph will already considerably speed up some common queries, such as finding all the instances of a type: in Hawk, this is done through direct edge traversal instead of going through the entire model. However, queries that filter model elements through the value of their attributes will need additional indexing to be set up. For instance, if we wanted to speed up return Class.all.selectOne(c|c.name='MyClass'); (which returns the class named \"MyClass\"), we would need to index the name attribute in the Class type. To do so, we need to go to the Hawk configure dialog, select the \"Indexed Attributes\" tab and press the \"Add\" button. This dialog will open: Its fields are as follows: Metamodel URI: the URI of the metamodel that has the type we want. Type Name: the name of the type (here \"Class\"). Attribute Name: the name of the attribute to be indexed (here \"name\"). Please allow some time after the dialog is closed to have Hawk generate the index. Currently, Hawk can index attributes with strings, booleans and numbers. Indexing will speed up not only = , but also > and all the other relational operators. Managing derived attributes \u00b6 Sometimes we will need to filter model elements through a piece of information that is not directly stored among its attributes, but is rather computed from them. To speed up the process, Hawk can precompute these derived attributes in the graph, keeping them up to date and indexing them. 
For instance, if we wanted to quickly filter UML classes by their number of operations, we would go to the Hawk configure dialog, select the \"Derived Attributes\" tab and click on the \"Add\" button. This dialog would appear: The fields are as follows: Metamodel URI: the URI of the metamodel with the type to be extended. Type Name: the name of the type we are going to extend. Attribute Name: the name of the new derived attribute (should be unique). Attribute Type: the type of the new derived attribute. isMany: true if this is a collection of values, false otherwise. isOrdered: true if this is an ordered collection of values, false otherwise. isUnique: true if the value should provide a unique identifier, false otherwise. Derivation Language: query language that the derivation logic will be written in. EOL is the default choice. Derivation Logic: expression in the chosen language that will compute the value. Hawk provides the self variable to access the model element being extended. For this particular example, we'd set the fields like this: Metamodel URI: the URI of the UML metamodel. Type Name: Class. Attribute Name: ownedOperationCount. Attribute Type: Integer. isMany, isOrdered, isUnique: false. Derivation Language: EOLQueryEngine. Derivation Logic: return self.ownedOperation.size; . After pressing OK, Hawk will spend some time computing the derived attribute and indexing the value. After that, queries such as return Class.all.select(c|c.ownedOperationCount > 20); will complete much faster. Querying the graph \u00b6 To query the indexed models, use the \"Query\" button of the Hawk view. This dialog will open: The actual query can be entered through the \"Query\" field manually, or loaded from a file using the \"Query File\" button. The query should be written in the language selected in \"Query Engine\". 
The scope of the query can be limited using the \"Context Repositories\" and \"Context Files\" fields: for instance, using set1.xmi in the \"Context Files\" field would limit it to the contents of the set1.xmi file. Running the query with the \"Run Query\" button will place the results in the \"Result\" field.","title":"Core concepts"},{"location":"basic-use/core-concepts/#core-concepts-and-general-usage","text":"","title":"Core concepts and general usage"},{"location":"basic-use/core-concepts/#components","text":"Hawk is an extensible system. Currently, it contains the following kinds of components: Type Role Current implementations Change listeners React to changes in the graph produced by the updaters Tracing, Validation Graph backends Integrate database technologies Neo4j , OrientDB , Greycat Model drivers Integrate modelling technologies Ecore , BPMN , Modelio , IFC2x3/IFC4 in this repo , and UML2 Query languages Translate high-level queries into efficient graph queries Epsilon Object Language , Epsilon Pattern Language , OrientDB SQL Updaters Update the graph based on the detected changes in the models and metamodels Built-in VCS managers Integrate file-based model repositories Local folders, SVN repositories, Git repositories, Eclipse workspaces, HTTP files","title":"Components"},{"location":"basic-use/core-concepts/#general-usage","text":"Using Hawk generally involves these steps: Create a new Hawk index, based on a specific backend (e.g. Neo4j or OrientDB). Add the required metamodels to the index. Add the model repositories to be monitored. Wait for the initial batch insert (may take some time in large repositories). Add the desired indexed and derived attributes. Perform fast and efficient queries on the graph, using one of the supported query languages (see table above). 
In the following sections, we will show how to perform these steps.","title":"General usage"},{"location":"basic-use/core-concepts/#managing-indexes-with-the-hawk-view","text":"To manage and use Hawk indexes, first open the \"Hawk\" Eclipse view, using \"Window > Show View > Other... > Hawk > Hawk\". It should look like this: Hawk indexes are queried and managed from this view. From left to right, the buttons are: Query: opens the query dialog. Run: starts a Hawk index if it was stopped. Stop: stops a Hawk index if it was running. Sync: requests the Hawk index to check the indexed repositories immediately. Delete: removes an index from the Hawk view, without deleting the actual database (it can usually be recovered later using the \"Import\" button). To remove a local index completely, select it and press Shift+Delete . New: creates a new index (more info below). Import: imports a Hawk index from a factory. Hawk itself only provides a \"local\" factory that looks at the subdirectories of the current Eclipse workspace. Configure: opens the index configuration dialog, which allows for managing the registered metamodels, the repositories to be indexed, the attributes to be derived and the attributes to be indexed.","title":"Managing indexes with the Hawk view"},{"location":"basic-use/core-concepts/#creating-a-new-index","text":"To create a new index, open the Hawk view and use the \"New\" button to open this dialog: The dialog requires these fields: Name: a descriptive name for the index. Only used as an identifier. Instance type: Hawk only supports local instances, but mondo-integration can add support for remote instances. Local storage folder: folder that will store the actual database. If the folder exists, Hawk will reuse that database instead of creating a new one. Remote location: only used for the remote instances in mondo-integration . Enabled plugins: list of plugins that are currently enabled in Hawk. 
Back-end: database backend to be used (currently either Neo4j or OrientDB). Min/max delay: minimum and maximum delays in milliseconds between synchronisations. Hawk will start at the minimum value: every time it does not find any changes, it will double the delay up to the maximum value. If it finds a change, it will reset back to the minimum value. Periodic synchronisation can be completely disabled by changing the minimum and maximum delays to 0: in this mode, Hawk will only synchronise on startup, when a repository is added or when the user requests it manually. Once these fields have been filled in, Hawk will create and set up the index in a short period.","title":"Creating a new index"},{"location":"basic-use/core-concepts/#managing-metamodels","text":"After creating the index, the next step is to register the metamodels of the models that are going to be indexed. To do this, select the index in the Hawk view and either double click it or click on the \"Configure\" button. The configure dialog will open: The configure dialog has several tabs. For managing metamodels, we need to go to the \"Metamodels\" tab. It will list the URIs of the currently registered metamodels. If a metamodel we need is not listed there, we can use the \"Add\" button to provide Hawk with the appropriate file to be indexed (e.g. the .ecore file for EMF-based models, or the metamodel-descriptor.xml for Modelio-based models). We can also \"Remove\" metamodels: this will remove all dependent models and metamodels as well. To try out Hawk, we recommend adding the JDTAST.ecore metamodel, which was used in the GraBaTs 2009 case study from AtlanMod . For Modelio metamodels, use the metamodel-descriptor.xml for Modelio 3.6 projects (for older projects, use the older descriptors included as metamodel_*.xml files in the Modelio 3.6 sources ). Keep in mind that metamodels may have dependencies on others. 
You will need to either add all metamodels at once, or add each metamodel after those it depends upon. If adding all the metamodels at once, Hawk will rearrange their addition taking into account their mutual dependencies. Note: the EMF driver can parse regular Ecore metamodels with the .ecore extension. Note: regarding the Modelio metamodel-descriptor.xml files, you can find those as part of the Modelio source code .","title":"Managing metamodels"},{"location":"basic-use/core-concepts/#managing-repositories","text":"Having added the metamodels of the models to be indexed, the following step is to add the repositories to be indexed. To do so, go to the \"Indexed Locations\" tab of the Hawk configure dialog, and use the \"Add\" button. Hawk will present the following dialog: The fields to be used are as follows: Type: type of repository to be indexed. Location: URL or path to the repository. For local folders, it is recommended to use the \"Browse...\" button to produce the appropriate file:// URL. For SVN, it is best to copy and paste the full URL. For Git repositories, you can use a path to the root folder of your Git clone, or a file://path/to/repo[?branch=BRANCH] URL (where the optional ?branch=BRANCH part can be used to specify a branch other than the one currently checked out). For Workspace repositories, the location is irrelevant: selecting any directory from \"Browse...\" will work just the same. User + pass: for private SVN repositories, these will be the username and password to be used to connect to the repository. Hawk will store the password in the Eclipse secure storage. To try out Hawk, after adding the JDTAST.ecore metamodel from the previous section, we recommend adding a folder with a copy of the set0.xmi file. It has around 70k model elements. To watch over the indexing process, look at the \"Error Log\" view or run Eclipse with the -console option. 
The supported file extensions are as follows: Driver Extensions EMF .xmi , .model , any extensions in the EMF extension factory map, any extensions mentioned through the org.hawk.emf.model.extraExtensions Java system property (e.g. -Dorg.hawk.emf.model.extraExtensions=.railway,.rail ). UML2 .uml . .profile.uml files can be indexed normally and also registered as metamodels. BPMN .bpmn , .bpmn2 . Modelio .exml , .ramc . Parses mmversion.dat internally for metadata. IFC .ifc , .ifcxml , .ifc.txt , .ifcxml.txt , .ifc.zip , .ifczip .","title":"Managing repositories"},{"location":"basic-use/core-concepts/#managing-indexed-attributes","text":"Simply indexing the models into the graph will already considerably speed up some common queries, such as finding all the instances of a type: in Hawk, this is done through direct edge traversal instead of going through the entire model. However, queries that filter model elements through the value of their attributes will need additional indexing to be set up. For instance, if we wanted to speed up return Class.all.selectOne(c|c.name='MyClass'); (which returns the class named \"MyClass\"), we would need to index the name attribute in the Class type. To do so, we need to go to the Hawk configure dialog, select the \"Indexed Attributes\" tab and press the \"Add\" button. This dialog will open: Its fields are as follows: Metamodel URI: the URI of the metamodel that has the type we want. Type Name: the name of the type (here \"Class\"). Attribute Name: the name of the attribute to be indexed (here \"name\"). Please allow some time after the dialog is closed to have Hawk generate the index. Currently, Hawk can index attributes with strings, booleans and numbers. 
Indexing will speed up not only = , but also > and all the other relational operators.","title":"Managing indexed attributes"},{"location":"basic-use/core-concepts/#managing-derived-attributes","text":"Sometimes we will need to filter model elements through a piece of information that is not directly stored among its attributes, but is rather computed from them. To speed up the process, Hawk can precompute these derived attributes in the graph, keeping them up to date and indexing them. For instance, if we wanted to quickly filter UML classes by their number of operations, we would go to the Hawk configure dialog, select the \"Derived Attributes\" tab and click on the \"Add\" button. This dialog would appear: The fields are as follows: Metamodel URI: the URI of the metamodel with the type to be extended. Type Name: the name of the type we are going to extend. Attribute Name: the name of the new derived attribute (should be unique). Attribute Type: the type of the new derived attribute. isMany: true if this is a collection of values, false otherwise. isOrdered: true if this is an ordered collection of values, false otherwise. isUnique: true if the value should provide a unique identifier, false otherwise. Derivation Language: query language that the derivation logic will be written in. EOL is the default choice. Derivation Logic: expression in the chosen language that will compute the value. Hawk provides the self variable to access the model element being extended. For this particular example, we'd set the fields like this: Metamodel URI: the URI of the UML metamodel. Type Name: Class. Attribute Name: ownedOperationCount. Attribute Type: Integer. isMany, isOrdered, isUnique: false. Derivation Language: EOLQueryEngine. Derivation Logic: return self.ownedOperation.size; . After pressing OK, Hawk will spend some time computing the derived attribute and indexing the value. 
After that, queries such as return Class.all.select(c|c.ownedOperationCount > 20); will complete much faster.","title":"Managing derived attributes"},{"location":"basic-use/core-concepts/#querying-the-graph","text":"To query the indexed models, use the \"Query\" button of the Hawk view. This dialog will open: The actual query can be entered through the \"Query\" field manually, or loaded from a file using the \"Query File\" button. The query should be written in the language selected in \"Query Engine\". The scope of the query can be limited using the \"Context Repositories\" and \"Context Files\" fields: for instance, using set1.xmi in the \"Context Files\" field would limit it to the contents of the set1.xmi file. Running the query with the \"Run Query\" button will place the results in the \"Result\" field.","title":"Querying the graph"},{"location":"basic-use/examples-modelio/","text":"Example queries on Modelio models \u00b6 This article shows several example queries on Modelio projects. The Modelio model driver does not use the XMI export in Modelio: instead, it parses .exml files directly (which might be contained in .ramc files) and understands metamodels described in Modelio metamodel_descriptor.xml files. (To obtain one, download the source code for your Modelio version and search within it. Here is a copy of the one used for Modelio 3.6.) All the queries are written in the Epsilon Object Language , and assume that the toy Zoo Modelio project has been indexed. The queries are based on those in the XMI-based UML examples page. The underlying UML model looks like this: To avoid ambiguity in type names, the default namespaces list in the query dialog should include modelio://uml::statik . All instances of a type \u00b6 Returns the number of instances of \"Class\" in the index: return Class . all . 
size ; Metamodel URI for the \"Class\" type \u00b6 Returns the URI of the metamodel that contains the \"Class\" type ( modelio://uml::statik ): return Model . types . selectOne ( t | t . name = 'Class' ). metamodel . uri ; Reference slots in a type \u00b6 Returns the reference slots in the type \"Class\": return Model . types . select ( t | t . name = 'Class' ). references ; Reference traversal \u00b6 Returns the superclass of \"Zebra\" by navigating the \"Parent\" and \"SuperType\" associations present in the Modelio metamodel: return Class . all . selectOne ( c | c . Name = 'Zebra' ) . Parent . SuperType . Name ; Reverse reference traversal \u00b6 Returns the subclasses of \"Animal\", using the revRefNav_ prefix to navigate references in reverse: return Class . all . selectOne ( c | c . Name = 'Animal' ) . revRefNav_SuperType . revRefNav_Parent . Name ; Range queries with indexed or derived integer attributes \u00b6 This example requires adding a derived attribute first: Metamodel URI: modelio://uml::statik Type Name: Class Attribute Name: ownedOperationCount Attribute Type: Integer isMany, isOrdered, isUnique: false Derivation Language: EOLQueryEngine Derivation Logic: return self.OwnedOperation.size; After it has been added, this query will return the classes that have one or more operations: return Class . all . select ( c | c . ownedOperationCount > 0 ). Name ; Advanced example: loops, variables and custom operations \u00b6 This query produces a sequence of >x, y pairs which indicate that y classes have more than x operations of their own: var counts = Sequence {}; var i = 0 ; var n = count ( 0 ); while ( n > 0 ) { counts . add ( Sequence { \">\" + i , n }); i = i + 1 ; n = count ( i ); } return counts ; operation count ( n ) { return Class . all . select ( c | c . ownedOperationCount > n ).
size ; }","title":"Examples (Modelio)"},{"location":"basic-use/examples-modelio/#example-queries-on-modelio-models","text":"This article shows several example queries on Modelio projects. The Modelio model driver does not use the XMI export in Modelio: instead, it parses .exml files directly (which might be contained in .ramc files) and understands metamodels described in Modelio metamodel_descriptor.xml files. (To obtain one, download the source code for your Modelio version and search within it. Here is a copy of the one used for Modelio 3.6.) All the queries are written in the Epsilon Object Language , and assume that the toy Zoo Modelio project has been indexed. The queries are based on those in [[the XMI-based UML examples page|Example queries on XMI based UML models]]. The underlying UML model looks like this: To avoid ambiguity in type names, the default namespaces list in the query dialog should include modelio://uml::statik .","title":"Example queries on Modelio models"},{"location":"basic-use/examples-modelio/#all-instances-of-a-type","text":"Returns the number of instances of \"Class\" in the index: return Class . all . size ;","title":"All instances of a type"},{"location":"basic-use/examples-modelio/#metamodel-uri-for-the-class-type","text":"Returns the URI of the metamodel that contains the \"Class\" type ( modelio://uml::statik ): return Model . types . selectOne ( t | t . name = 'Class' ). metamodel . uri ;","title":"Metamodel URI for the \"Class\" type"},{"location":"basic-use/examples-modelio/#reference-slots-in-a-type","text":"Returns the reference slots in the type \"Class\": return Model . types . select ( t | t . name = 'Class' ). references ;","title":"Reference slots in a type"},{"location":"basic-use/examples-modelio/#reference-traversal","text":"Returns the superclass of \"Zebra\" by navigating the \"Parent\" and \"SuperType\" associations present in the Modelio metamodel: return Class . all . selectOne ( c | c . Name = 'Zebra' ) . 
Parent . SuperType . Name ;","title":"Reference traversal"},{"location":"basic-use/examples-modelio/#reverse-reference-traversal","text":"Returns the subclasses of \"Animal\", using the revRefNav_ prefix to navigate references in reverse: return Class . all . selectOne ( c | c . Name = 'Animal' ) . revRefNav_SuperType . revRefNav_Parent . Name ;","title":"Reverse reference traversal"},{"location":"basic-use/examples-modelio/#range-queries-with-indexed-or-derived-integer-attributes","text":"This example requires adding a derived attribute first: Metamodel URI: modelio://uml::statik Type Name: Class Attribute Name: ownedOperationCount Attribute Type: Integer isMany, isOrdered, isUnique: false Derivation Language: EOLQueryEngine Derivation Logic: return self.OwnedOperation.size; After it has been added, this query will return the classes that have one or more operations: return Class . all . select ( c | c . ownedOperationCount > 0 ). Name ;","title":"Range queries with indexed or derived integer attributes"},{"location":"basic-use/examples-modelio/#advanced-example-loops-variables-and-custom-operations","text":"This query produces a sequence of >x, y pairs which indicate that y classes have more than x operations of their own: var counts = Sequence {}; var i = 0 ; var n = count ( 0 ); while ( n > 0 ) { counts . add ( Sequence { \">\" + i , n }); i = i + 1 ; n = count ( i ); } return counts ; operation count ( n ) { return Class . all . select ( c | c . ownedOperationCount > n ). size ; }","title":"Advanced example: loops, variables and custom operations"},{"location":"basic-use/examples-xmi/","text":"Example queries on XMI models \u00b6 These are some sample queries that can be done on any set of indexed XMI-based UML models, assuming that Class::name has been added as an indexed attribute and Class::ownedOperationCount has been defined as a derived attribute (as shown in [[Basic concepts and usage]]). All the queries are written in the Epsilon Object Language .
In order to index XMI-based UML models, you only need to enable the UMLMetaModelResourceFactory and UMLModelResourceFactory plugins when you create a new Hawk instance, and ensure your files have the .uml extension. If you are using any predefined UML data types, you may also want to add a PredefinedUMLLibraries location inside \"Indexed Locations\": that will integrate those predefined objects into the Hawk graph, allowing you to reference them on queries. The rest of this article will run on this toy XMI-based UML file , which was exported from this Modelio 3.2.1 project : To avoid ambiguity in type names, the default namespaces list in the query dialog should include the UML metamodel URI ( http://www.eclipse.org/uml2/5.0.0/UML for the above UML.ecore file). All instances of a type \u00b6 return Class.all.size; Returns the total number of classes within the specified scope. If you leave \"Context Files\" empty, it'll count all the classes in all the projects. If you put \"*OSS.modelio.zip\" in \"Context Files\", it'll count only the classes within the OSS project. This is faster than going through the model because we can go to the Class node and then simply count all the incoming edges with label \"ofType\". Reference slots in a type \u00b6 return Model.types.select(t|t.name='Class').references; Gives you all the reference slots in the UML \"Class\" type. This is an example of the queries that can be performed at the \"meta\" level: more details are available in [[Meta level queries in Hawk]]. The query dialog with the result would look like this: Reference traversal \u00b6 return Class.all .select(c|c.qualifiedName='zoo::Zebra') .superClass.flatten.name; Gives you the names of all the superclasses of class Zebra within model zoo . Reverse reference traversal \u00b6 return Class.all .select(c|c.qualifiedName='zoo::Animal') .revRefNav_superClass.flatten.name; Gives the names of all the subclasses of Animal (follows \"superClass\" in reverse).
The UML metamodel doesn't have \"subclass\" links, but we can use Hawk's automatic support for reverse traversal of references. In general, if x.e is a reference, we can follow it in reverse with x.revRefNav_e . We can also access containers using x.eContainer . Range queries with indexed or derived integer attributes \u00b6 return Class.all.select(c|c.ownedOperationCount > 0).name; Finds the names of the classes with at least one operation of their own. Advanced example: loops, variables and custom operations \u00b6 var counts = Sequence {}; var i = 0; var n = count(0); while (n > 0) { counts.add(Sequence {\">\" + i, n}); i = i + 1; n = count(i); } return counts; operation count(n) { return Class.all.select(c|c.ownedOperationCount > n).size; } This query produces a sequence of >x, y pairs which indicate that y classes have more than x operations of their own.","title":"Examples (XMI)"},{"location":"basic-use/examples-xmi/#example-queries-on-xmi-models","text":"These are some sample queries that can be done on any set of indexed XMI-based UML models, assuming that Class::name has been added as an indexed attribute and Class::ownedOperationCount has been defined as a derived attribute (as shown in [[Basic concepts and usage]]). All the queries are written in the Epsilon Object Language . In order to index XMI-based UML models, you only need to enable the UMLMetaModelResourceFactory and UMLModelResourceFactory plugins when you create a new Hawk instance, and ensure your files have the .uml extension. If you are using any predefined UML data types, you may also want to add a PredefinedUMLLibraries location inside \"Indexed Locations\": that will integrate those predefined objects into the Hawk graph, allowing you to reference them on queries.
The rest of this article will run on this toy XMI-based UML file , which was exported from this Modelio 3.2.1 project : To avoid ambiguity in type names, the default namespaces list in the query dialog should include the UML metamodel URI ( http://www.eclipse.org/uml2/5.0.0/UML for the above UML.ecore file).","title":"Example queries on XMI models"},{"location":"basic-use/examples-xmi/#all-instances-of-a-type","text":"return Class.all.size; Returns the total number of classes within the specified scope. If you leave \"Context Files\" empty, it'll count all the classes in all the projects. If you put \"*OSS.modelio.zip\" in \"Context Files\", it'll count only the classes within the OSS project. This is faster than going through the model because we can go to the Class node and then simply count all the incoming edges with label \"ofType\".","title":"All instances of a type"},{"location":"basic-use/examples-xmi/#reference-slots-in-a-type","text":"return Model.types.select(t|t.name='Class').references; Gives you all the reference slots in the UML \"Class\" type. This is an example of the queries that can be performed at the \"meta\" level: more details are available in [[Meta level queries in Hawk]]. The query dialog with the result would look like this:","title":"Reference slots in a type"},{"location":"basic-use/examples-xmi/#reference-traversal","text":"return Class.all .select(c|c.qualifiedName='zoo::Zebra') .superClass.flatten.name; Gives you the names of all the superclasses of class Zebra within model zoo .","title":"Reference traversal"},{"location":"basic-use/examples-xmi/#reverse-reference-traversal","text":"return Class.all .select(c|c.qualifiedName='zoo::Animal') .revRefNav_superClass.flatten.name; Gives the names of all the subclasses of Animal (follows \"superClass\" in reverse). The UML metamodel doesn't have \"subclass\" links, but we can use Hawk's automatic support for reverse traversal of references.
In general, if x.e is a reference, we can follow it in reverse with x.revRefNav_e . We can also access containers using x.eContainer .","title":"Reverse reference traversal"},{"location":"basic-use/examples-xmi/#range-queries-with-indexed-or-derived-integer-attributes","text":"return Class.all.select(c|c.ownedOperationCount > 0).name; Finds the names of the classes with at least one operation of their own.","title":"Range queries with indexed or derived integer attributes"},{"location":"basic-use/examples-xmi/#advanced-example-loops-variables-and-custom-operations","text":"var counts = Sequence {}; var i = 0; var n = count(0); while (n > 0) { counts.add(Sequence {\">\" + i, n}); i = i + 1; n = count(i); } return counts; operation count(n) { return Class.all.select(c|c.ownedOperationCount > n).size; } This query produces a sequence of >x, y pairs which indicate that y classes have more than x operations of their own.","title":"Advanced example: loops, variables and custom operations"},{"location":"basic-use/installation/","text":"Hawk can be used as a regular Java library (to be embedded within another Java program) or as a set of plugins for the Eclipse IDE. To install most of Hawk's Eclipse plugins, point your installation to this update site, which is kept up to date automatically using Travis: https://download.eclipse.org/hawk/2.1.0/updates/ This is a composite update site, which contains not only Hawk, but also its dependencies. Simply check all the categories that start with \"Hawk\". Some of the components in Hawk cannot be redistributed in binary form due to incompatible licenses. You will need to build the update site for these restricted components yourself: please consult the developer resources in the wiki to do that.","title":"Installation"},{"location":"basic-use/papyrus/","text":"Hawk includes specific support for MDT UML2 models and UML profiles developed using Papyrus UML.
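As a quick illustration of the container navigation mentioned above, here is a short EOL query. This is a sketch, not part of the official examples: it assumes the toy Zoo model used throughout these pages has been indexed, with Zebra contained in the zoo model.

```eol
// Sketch: assumes the toy Zoo UML model from the examples above is indexed.
// eContainer navigates to the containing element (for Zebra, its package),
// complementing the revRefNav_ prefix for reverse reference traversal.
return Class.all
  .selectOne(c|c.qualifiedName = 'zoo::Zebra')
  .eContainer.name;
```

Such a query can be run from the same query dialog as the other examples, with the UML metamodel URI in the default namespaces list.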
This can be used by enabling the UMLMetaModelResourceFactory and UMLModelResourceFactory plugins when creating a Hawk instance. The implementation mostly reuses MDT UML2 and Papyrus UML as-is, in order to maximize compatibility. There are some minor caveats, which are documented in this page. Supported file extensions \u00b6 Hawk indexes plain UML2 models with the .uml extension, and Papyrus profiles with the .profile.uml extension. It does not index .di nor .notation files at the moment, as these do not provide semantic information. .xmi files are not indexed by the Hawk UML components, to avoid conflicts with the plain EMF support (matching the file to the proper model resource is done strictly by file extension). You are recommended to rename your UML2 XMI files to .uml for now. Predefined UML packages \u00b6 UML2 provides an implementation of the UML standard libraries, with packages containing some common datatypes (e.g. String or Integer). If your models use any of these libraries, we heavily recommend that you add a PredefinedUMLLibraries component in your \"Indexed Locations\" section. Otherwise, any references from your models to the libraries will be left unresolved, and you will not be able to use those predefined entities in your queries. This is because Hawk operates normally on files, and the predefined UML libraries are generally bundled within the UML2 plugins. The PredefinedUMLLibraries component exposes those bundled resources to Hawk in a way that is transparent to the querying language. Multi-version Papyrus UML profile support \u00b6 Beyond registering all the metamodels required to index plain UML models, the UML metamodel resource factory in Hawk can register .profile.uml files as metamodels. This allows us to index UML models with custom profiles in Hawk. Since UML profiles can be versioned, Hawk will register version X.Y.Z of a profile with URI http://your/profile under the URI http://your/profile/X.Y.Z .
When querying with Hawk, you will have to specify http://your/profile/X.Y.Z in your default namespaces, in order to resolve the ambiguity that may exist between multiple versions of the same metamodel. If a new version of the UML profile is created, you will need to register the .profile.uml file again with Hawk before it can index models that use that version of the profile. Hawk treats entities of different versions of the same profile as entirely different types. In terms of implementation details, Hawk takes advantage of the fact that .profile.uml files contain a collection of Ecore EPackages . Hawk simply adds the /X.Y.Z version suffix to their namespace URI, and otherwise leaves them untouched. Example: using Hawk to index all UML models in an Eclipse workspace \u00b6 We will show how Hawk can be used to index all the UML models in an Eclipse workspace, including those that have custom profiles applied to them. To illustrate our approach, we will use these toy models created with Papyrus. We assume that you have installed Hawk into your Eclipse instance, following the steps in [[this wiki page|Installation]]. Models \u00b6 The model is a very simple UML class diagram: It only has two classes, one of which has the <<Special>> stereotype with a priority property equal to 23. This value is not shown in the diagram, but it can be checked from the \"Profile\" page of the \"Properties\" view when the class is selected. The profile including the <<Special>> stereotype is also very simple: The diagram imports the Class UML metaclass, and then extends it with the <<Special>> stereotype. Creating the Hawk index \u00b6 Before we can run any queries, we need to create a Hawk index. If we have installed Hawk correctly, we will be able to open the \"Hawk\" view and see something like this: Right now, we have no indexes in Hawk. We need to press the \"Add\" button, which is highlighted in red above. 
We should see a dialog similar to this: Important points: We can pick any name we want, as long as it is unique. Instance type should be a LocalHawkFactory if we intend to index our workspace. The Local storage folder will contain some of the configuration of that Hawk instance, and the database. Remote location is irrelevant when using the LocalHawkFactory . If we are only interested in indexing the UML models in the workspace, it is a good idea to Disable all the plugins and then check only the UML metamodel and model resource factories. You can choose to use Neo4j (if you [[build it on your own|Running from source]]), OrientDB, or any other backend we may support in the future. Min/Max Delay indicate how often Hawk will poll all the indexed locations. If you are only indexing the current workspace, you can leave both at 0 to disable polling: regardless of this setting, Hawk will react automatically whenever something in the workspace changes. Once the index has been created, you should see an entry for it in the \"Hawk\" view: Adding metamodels and models \u00b6 From the screenshot above, we know that the index is RUNNING (available for queries) and not UPDATING nor STOPPED , so we can start configuring it as we need. First, we should double click on it to open the configuration dialog: We should go to the \"Metamodels\" tab and click on \"Add...\", then select the specialThings.profile/model.profile.uml file. Hawk will register our custom profile as a metamodel, and we will be ready to index models using all the versions of this profile so far. Should we define any newer versions, we will have to add the file again to Hawk. The dialog will now list the new metamodel: Now we are ready to add the locations where the models to be indexed are stored. We go to the \"Indexed Locations\" tab and click on \"Add\". First, we will add the predefined UML libraries with some commonly used instances (e.g.
UML data types): We need to pick the right \"Type\", and then enter / in the \"Location\" field. The location is ignored for this repository, but due to current limitations in the UI we have to enter something in the field. Next, we have to tell Hawk to index all the models in the workspace. We will \"Add\" another location, and this time fill the dialog like this: Again, the / \"Location\" is irrelevant but required by the UI. Hawk will spend some time UPDATING , and once it is RUNNING again we will be ready to run some queries on it. Querying Hawk \u00b6 We can finally query Hawk now. To do so, we need to select our index on the \"Hawk\" view and click on the \"Query\" button, which looks like a magnifying glass: We will see a dialog like this one, with all fields empty: Enter the query return Class.all.name; and click on the \"Run Query\" button. This query lists the names of all the classes indexed so far by Hawk. You will notice that we obtain these results: [E, T, MyClass, Special, V, NotSoSpecial, Stereotype1, K, E] The E/T/V/K/E classes came from the predefined UML libraries. If you want only the results from your workspace, you must tell Hawk through the \"Context Repositories\" field, by entering platform:/resource . This is the base URI used by Hawk to identify all the files in your workspace. Click on \"Run Query\" again, and you should obtain the results shown in the screenshot: [MyClass, Stereotype1, Special, NotSoSpecial] Note how the query also returns the classes in the profile. Should you want to avoid this, you can use the \"Context Files\" field ( *model.uml will do this) to further restrict the scope of the query. Finding UML objects by stereotype \u00b6 If you would like to find all applications of stereotype X , you can simply use X.all and then use base_Metaclass to find the object that was annotated with that stereotype.
For instance, this query will find the name of all the classes that had the <<Special>> stereotype applied to them: return Special.all.base_Class.name; You will get: [MyClass] You can also access stereotype properties: return Special.all.collect(s| Sequence { s.priority, s.base_Class.name } ).asSequence; This will produce: [[23, MyClass]] Finding stereotype applications from the UML object \u00b6 If you want to go the other way around, you can use reverse reference navigation on those base_X references to find the stereotypes that have been applied to a UML object: return Class.all .selectOne(s|s.name = 'MyClass') .revRefNav_base_Class .collect(st|Model.getTypeOf(st)) .name; This would produce: [Special]","title":"Papyrus UML support"},{"location":"basic-use/papyrus/#supported-file-extensions","text":"Hawk indexes plain UML2 models with the .uml extension, and Papyrus profiles with the .profile.uml extension. It does not index .di nor .notation files at the moment, as these do not provide semantic information. .xmi files are not indexed by the Hawk UML components, to avoid conflicts with the plain EMF support (matching the file to the proper model resource is done strictly by file extension). You are recommended to rename your UML2 XMI files to .uml for now.","title":"Supported file extensions"},{"location":"basic-use/papyrus/#predefined-uml-packages","text":"UML2 provides an implementation of the UML standard libraries, with packages containing some common datatypes (e.g. String or Integer). If your models use any of these libraries, we heavily recommend that you add a PredefinedUMLLibraries component in your \"Indexed Locations\" section. Otherwise, any references from your models to the libraries will be left unresolved, and you will not be able to use those predefined entities in your queries. This is because Hawk operates normally on files, and the predefined UML libraries are generally bundled within the UML2 plugins. 
The PredefinedUMLLibraries component exposes those bundled resources to Hawk in a way that is transparent to the querying language.","title":"Predefined UML packages"},{"location":"basic-use/papyrus/#multi-version-papyrus-uml-profile-support","text":"Beyond registering all the metamodels required to index plain UML models, the UML metamodel resource factory in Hawk can register .profile.uml files as metamodels. This allows us to index UML models with custom profiles in Hawk. Since UML profiles can be versioned, Hawk will register version X.Y.Z of a profile with URI http://your/profile under the URI http://your/profile/X.Y.Z . When querying with Hawk, you will have to specify http://your/profile/X.Y.Z in your default namespaces, in order to resolve the ambiguity that may exist between multiple versions of the same metamodel. If a new version of the UML profile is created, you will need to register the .profile.uml file again with Hawk before it can index models that use that version of the profile. Hawk treats entities of different versions of the same profile as entirely different types. In terms of implementation details, Hawk takes advantage of the fact that .profile.uml files contain a collection of Ecore EPackages . Hawk simply adds the /X.Y.Z version suffix to their namespace URI, and otherwise leaves them untouched.","title":"Multi-version Papyrus UML profile support"},{"location":"basic-use/papyrus/#example-using-hawk-to-index-all-uml-models-in-an-eclipse-workspace","text":"We will show how Hawk can be used to index all the UML models in an Eclipse workspace, including those that have custom profiles applied to them. To illustrate our approach, we will use these toy models created with Papyrus.
We assume that you have installed Hawk into your Eclipse instance, following the steps in [[this wiki page|Installation]].","title":"Example: using Hawk to index all UML models in an Eclipse workspace"},{"location":"basic-use/papyrus/#models","text":"The model is a very simple UML class diagram: It only has two classes, one of which has the <<Special>> stereotype with a priority property equal to 23. This value is not shown in the diagram, but it can be checked from the \"Profile\" page of the \"Properties\" view when the class is selected. The profile including the <<Special>> stereotype is also very simple: The diagram imports the Class UML metaclass, and then extends it with the <<Special>> stereotype.","title":"Models"},{"location":"basic-use/papyrus/#creating-the-hawk-index","text":"Before we can run any queries, we need to create a Hawk index. If we have installed Hawk correctly, we will be able to open the \"Hawk\" view and see something like this: Right now, we have no indexes in Hawk. We need to press the \"Add\" button, which is highlighted in red above. We should see a dialog similar to this: Important points: We can pick any name we want, as long as it is unique. Instance type should be a LocalHawkFactory if we intend to index our workspace. The Local storage folder will contain some of the configuration of that Hawk instance, and the database. Remote location is irrelevant when using the LocalHawkFactory . If we are only interested in indexing the UML models in the workspace, it is a good idea to Disable all the plugins and then check only the UML metamodel and model resource factories. You can choose to use Neo4j (if you [[build it on your own|Running from source]]), OrientDB, or any other backend we may support in the future. Min/Max Delay indicate how often Hawk will poll all the indexed locations.
If you are only indexing the current workspace, you can leave both at 0 to disable polling: regardless of this setting, Hawk will react automatically whenever something in the workspace changes. Once the index has been created, you should see an entry for it in the \"Hawk\" view:","title":"Creating the Hawk index"},{"location":"basic-use/papyrus/#adding-metamodels-and-models","text":"From the screenshot above, we know that the index is RUNNING (available for queries) and not UPDATING nor STOPPED , so we can start configuring it as we need. First, we should double click on it to open the configuration dialog: We should go to the \"Metamodels\" tab and click on \"Add...\", then select the specialThings.profile/model.profile.uml file. Hawk will register our custom profile as a metamodel, and we will be ready to index models using all the versions of this profile so far. Should we define any newer versions, we will have to add the file again to Hawk. The dialog will now list the new metamodel: Now we are ready to add the locations where the models to be indexed are stored. We go to the \"Indexed Locations\" tab and click on \"Add\". First, we will add the predefined UML libraries with some commonly used instances (e.g. UML data types): We need to pick the right \"Type\", and then enter / in the \"Location\" field. The location is ignored for this repository, but due to current limitations in the UI we have to enter something in the field. Next, we have to tell Hawk to index all the models in the workspace. We will \"Add\" another location, and this time fill the dialog like this: Again, the / \"Location\" is irrelevant but required by the UI. Hawk will spend some time UPDATING , and once it is RUNNING again we will be ready to run some queries on it.","title":"Adding metamodels and models"},{"location":"basic-use/papyrus/#querying-hawk","text":"We can finally query Hawk now. 
To do so, we need to select our index on the \"Hawk\" view and click on the \"Query\" button, which looks like a magnifying glass: We will see a dialog like this one, with all fields empty: Enter the query return Class.all.name; and click on the \"Run Query\" button. This query lists the names of all the classes indexed so far by Hawk. You will notice that we obtain these results: [E, T, MyClass, Special, V, NotSoSpecial, Stereotype1, K, E] The E/T/V/K/E classes came from the predefined UML libraries. If you want only the results from your workspace, you must tell Hawk through the \"Context Repositories\" field, by entering platform:/resource . This is the base URI used by Hawk to identify all the files in your workspace. Click on \"Run Query\" again, and you should obtain the results shown in the screenshot: [MyClass, Stereotype1, Special, NotSoSpecial] Note how the query also returns the classes in the profile. Should you want to avoid this, you can use the \"Context Files\" field ( *model.uml will do this) to further restrict the scope of the query.","title":"Querying Hawk"},{"location":"basic-use/papyrus/#finding-uml-objects-by-stereotype","text":"If you would like to find all applications of stereotype X , you can simply use X.all and then use base_Metaclass to find the object that was annotated with that stereotype.
For instance, this query will find the name of all the classes that had the <<Special>> stereotype applied to them: return Special.all.base_Class.name; You will get: [MyClass] You can also access stereotype properties: return Special.all.collect(s| Sequence { s.priority, s.base_Class.name } ).asSequence; This will produce: [[23, MyClass]]","title":"Finding UML objects by stereotype"},{"location":"basic-use/papyrus/#finding-stereotype-applications-from-the-uml-object","text":"If you want to go the other way around, you can use reverse reference navigation on those base_X references to find the stereotypes that have been applied to a UML object: return Class.all .selectOne(s|s.name = 'MyClass') .revRefNav_base_Class .collect(st|Model.getTypeOf(st)) .name; This would produce: [Special]","title":"Finding stereotype applications from the UML object"},{"location":"developers/plain-maven/","text":"Hawk can be reused as a library in a regular Java application, outside OSGi. Non-OSGi developers normally use Maven or a Maven-compatible build system (e.g. Ivy or SBT), rather than Tycho. To make it easier for these developers, Hawk provides a parallel hierarchy of pom-plain.xml files that can be used to build Hawk with plain Maven ( pom.xml files are reserved for Tycho). Not all Hawk modules are available through this build, as they may rely on OSGi (e.g. org.eclipse.hawk.modelio ) or require downloading many external dependencies (e.g. org.eclipse.hawk.bpmn ). .feature , .dependencies and .tests projects are not included either, as they are OSGi-specific. For that reason, this build should only be used for distribution, and not for regular development. 
To build with regular Maven, run mvn -f pom-plain.xml install from the root of the repository to compile the artifacts and install them into the local Maven repository, so they can be used in other Maven builds.","title":"Build with plain Maven"},{"location":"developers/run-from-source/","text":"These instructions are from a clean download of an Eclipse Luna Modelling distribution and include all optional dependencies. Clone this Git repository on your Eclipse instance (e.g. using git clone or EGit) and import all projects into the workspace (File > Import > Existing Projects into Workspace). Open the org.hawk.targetplatform/org.hawk.targetplatform.target file, wait for the target definition to be resolved and click on Set as Target Platform . Install IvyDE into your Eclipse instance, right click on org.hawk.neo4j-v2.dependencies and use \"Ivy > Retrieve 'dependencies'\". The libraries should appear within Referenced Libraries . Do the same for these other projects: org.hawk.orientdb org.hawk.localfolder org.hawk.greycat Force a full rebuild with Project > Clean... > Clean all projects if you still have errors. After all these steps, you should have a working version of Hawk with all optional dependencies and no errors. You can now use \"Run as... > Eclipse Application\" to open a nested Eclipse with the Hawk GUI running inside it.","title":"Run GUI from source"},{"location":"developers/server-from-source/","text":"In order to run the server products from the sources, you first need to follow the basic steps for running Hawk from source. Once you have done that, to run the server product, you should open the relevant .product file. The editor will look like this one: You should use one of the buttons highlighted in red (the triangle \"Run\" button or the bug-like \"Debug\" button) to run the product for the first time. It may fail, due to the slightly buggy way in which Eclipse produces the launch configuration from the product.
If you see this: !ENTRY org.eclipse.osgi 4 0 2017-04-15 13:51:14.444 !MESSAGE Application error !STACK 1 java.lang.RuntimeException: No application id has been found. That means you need to tweak the launch configuration a bit. Shut down the server by entering shutdown and then close in the \"Console\" view, and then open the \"Run\" menu and select \"Run Configurations...\". Select the relevant \"Eclipse Application\" launch configuration and go to the \"Plug-ins\" section: Click on \"Add Required Plugins\": you'll notice that it adds quite a few things. Click on \"Run\" now: it should work fine. Eventually, you should see this text: Welcome to the Hawk Server! List available commands with 'hserverHelp'. Stop the server with 'shutdown' and then 'close'. You are done! You can also use \"Debug\" to track bugs in the server itself. Note: if you would like to make changes to the Thrift API, you will need to edit the api.emf Emfatic file in the service.api project, and then regenerate the api.thrift file by using Ecore2Thrift . After that, you will need to run the Thrift code generator through the generate.sh script in the root of the same project.","title":"Run Server from source"},{"location":"developers/website/","text":"The website for Eclipse Hawk is written in MkDocs . The website repository is available here: https://git.eclipse.org/c/www.eclipse.org/hawk.git/ To work on the website, clone it with your Eclipse credentials, and follow the instructions in the included README.md file.","title":"Work on the website"},{"location":"server/api-security/","text":"In some cases, we may want to protect the API from unaccounted use, as clients would have access to potentially sensitive information. In order to provide this access control, the Apache Shiro library has been integrated transparently as a filter for all incoming requests to the endpoints under /thrift . 
/thrift-local endpoints are not password-protected, as they only answer requests from other processes in the machine hosting the MONDO Server. Apache Shiro protects these /thrift endpoints using standard HTTP Basic authentication, which is transparent to Thrift, avoiding the need to pollute the web API with access tokens in every single method. Industrial partners will be instructed to always use the authentication layer in combination with SSL, since HTTP Basic by itself is insecure. One important advantage of Shiro is its configurability through a single .ini file, like this one: [main] # Objects and their properties are defined here, # Such as the securityManager, Realms and anything # else needed to build the SecurityManager # Note: this should be set to true in production! ssl.enabled = true # Toggle to enable/disable authentication completely authcBasic.enabled = true # Use Hawk realm mondoRealm = uk.ac.york.mondo.integration.server.users.servlet.shiro.UsersRealm securityManager.realms = $mondoRealm # We\u2019re using SHA-512 for passwords, with 10k iterations credentialsMatcher = org.apache.shiro.authc.credential.Sha512CredentialsMatcher credentialsMatcher.hashIterations = 10000 mondoRealm.credentialsMatcher = $credentialsMatcher [urls] /thrift/** = ssl, authcBasic Shiro is heavily componentized, making it easy to provide alternative implementations of certain pieces and reuse the default implementations for the rest. In the shown example, all requests to the /thrift endpoints go through the default ssl and authcBasic filters: when enabled, these filters enforce the use of SSL and HTTP Basic authentication respectively. Both filters should be enabled in production environments. For the HTTP Basic authentication, the server provides its own implementation of a Shiro security realm, which is dedicated to storing and retrieving user details. 
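As an aside, the HTTP Basic scheme enforced by the authcBasic filter simply base64-encodes the credentials into an Authorization header, which is why it must be combined with SSL: base64 offers no confidentiality. A minimal Python sketch of the header a client would send (the function name and the alice credentials are made up for illustration):

```python
import base64

def basic_auth_header(username, password):
    # HTTP Basic base64-encodes 'username:password'; this is encoding,
    # not encryption, hence the requirement to use it only over SSL.
    token = base64.b64encode(f'{username}:{password}'.encode('utf-8')).decode('ascii')
    return {'Authorization': 'Basic ' + token}

print(basic_auth_header('alice', 's3cret'))
```

Any HTTP client library can attach such a header to requests against the /thrift endpoints.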
The security realm uses an embedded MapDB database to persist these user details, which are managed through the Users service (Section 5.2.4). An embedded database was used in order to prevent end users from having to set up a database just to store a small set of users. MapDB is distributed as a single .jar file, making it very simple to integrate. In any case, the realm could be replaced with another one if desired by editing shiro.ini on an installation. Passwords for the MONDO realm are stored in a hashed and salted form, using 10000 iterations of SHA-512 and a random per-password salt. As for the client side, the command-line based clients accept optional arguments for the required credentials when connecting to the Thrift endpoints. If the password is omitted, the command-line based clients will require it in a separate \"silent\" prompt that does not show the characters that are typed, preventing shoulder surfing attacks. Due to limitations in the Eclipse graphical user interface, these silent prompts are only available when running the command-line based clients from a proper terminal window and not from the Eclipse \"Console\" view. The graphical clients connect to the Thrift endpoints using \u201clazy\u201d credential providers: if authentication is required, they will attempt to retrieve previously used credentials from the Eclipse secure store and if no such credentials exist, they will show an authentication dialog asking for the username and password to be used. The Eclipse secure storage takes advantage of the access control and encryption capabilities of the underlying operating system as much as possible, and makes it possible to store passwords safely and conveniently. These stored MONDO server credentials can be managed from the \"Hawk Servers\" preference page. Regarding the Artemis messaging queue, it has been secured with the same Shiro realm as the Thrift endpoints. 
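The general technique described above (a random per-password salt plus many iterations of SHA-512) can be sketched in a few lines of Python. This illustrates the idea only; it is not the server's actual code, Shiro's Sha512CredentialsMatcher has its own exact salting and iteration semantics, and the helper names here are invented:

```python
import hashlib
import os

ITERATIONS = 10000  # matches credentialsMatcher.hashIterations in shiro.ini

def hash_password(password, salt=None):
    # Generate a random per-password salt unless one is supplied
    # (hypothetical helper, for illustration of the technique).
    salt = salt if salt is not None else os.urandom(16)
    digest = salt + password.encode('utf-8')
    for _ in range(ITERATIONS):
        digest = hashlib.sha512(digest).digest()
    return salt, digest

def verify_password(password, salt, expected):
    # Re-hash with the stored salt and compare against the stored digest.
    return hash_password(password, salt)[1] == expected

salt, stored = hash_password('hunter2')
print(verify_password('hunter2', salt, stored))  # True
print(verify_password('wrong', salt, stored))    # False
```

Iterating the hash makes brute-force attempts proportionally more expensive, while the per-password salt defeats precomputed lookup tables.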
The remote Hawk EMF abstraction (the only component that uses Artemis within the MONDO platform) will connect to Artemis with the same credentials that were used to connect to Thrift, if authentication was required.","title":"Thrift API security"},{"location":"server/api/","text":"Services \u00b6 Hawk \u00b6 The following service operations expose the capabilities of the Hawk heterogeneous model indexing framework. Hawk.createInstance \u00b6 Creates a new Hawk instance (stopped). Returns void . Takes these parameters: Name Type Documentation name string The unique name of the new Hawk instance. backend string The name of the backend to be used, as returned by listBackends(). minimumDelayMillis i32 Minimum delay between periodic synchronization in milliseconds. maximumDelayMillis i32 Maximum delay between periodic synchronization in milliseconds (0 to disable periodic synchronization). enabledPlugins (optional) list List of plugins to be enabled: if not set, all plugins are enabled. Hawk.listBackends \u00b6 Lists the names of the available storage backends. Returns list<string> . Does not take any parameters. Hawk.listPlugins \u00b6 Lists all the Hawk plugins that can be enabled or disabled: metamodel parsers, model parsers and graph change listeners. Returns list<string> . Does not take any parameters. Hawk.listInstances \u00b6 Lists the details of all Hawk instances. Returns list<HawkInstance> . Does not take any parameters. Hawk.removeInstance \u00b6 Removes an existing Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to remove. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. Hawk.startInstance \u00b6 Starts a stopped Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to start. 
May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. Hawk.stopInstance \u00b6 Stops a running Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to stop. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.syncInstance \u00b6 Forces an immediate synchronization on a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to synchronize. blockUntilDone (optional) bool If true, blocks the call until the synchronisation completes. False by default. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.registerMetamodels \u00b6 Registers a set of file-based metamodels with a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. metamodel list The metamodels to register. More than one metamodel file can be provided in one request, to accommodate fragmented metamodels. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. InvalidMetamodel The provided metamodel is not valid (e.g. unparsable or inconsistent). HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.unregisterMetamodels \u00b6 Unregisters a metamodel from a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. metamodel list The URIs of the metamodels. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. 
Hawk.listMetamodels \u00b6 Lists the URIs of the registered metamodels of a Hawk instance. Returns list<string> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.listQueryLanguages \u00b6 Lists the supported query languages and their status. Returns list<string> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. Hawk.query \u00b6 Runs a query on a Hawk instance and returns a sequence of scalar values and/or model elements. Returns QueryResult . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. query string The query to be executed. language string The name of the query language used (e.g. EOL, OCL). options HawkQueryOptions Options for the query. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. UnknownQueryLanguage The specified query language is not supported by the operation. InvalidQuery The specified query is not valid. FailedQuery The specified query failed to complete its execution. Hawk.resolveProxies \u00b6 Returns populated model elements for the provided proxies. Returns list<ModelElement> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. ids list Proxy model element IDs to be resolved. options HawkQueryOptions Options for the query. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.addRepository \u00b6 Asks a Hawk instance to start monitoring a repository. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. 
repo Repository The repository to monitor. credentials (optional) Credentials A valid set of credentials that has read-access to the repository. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. UnknownRepositoryType The specified repository type is not supported by the operation. VCSAuthenticationFailed The client failed to prove its identity in the VCS. Hawk.isFrozen \u00b6 Returns true if a repository is frozen, false otherwise. Returns bool . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to query. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.setFrozen \u00b6 Changes the 'frozen' state of a repository. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to be changed. isFrozen bool Whether the repository should be frozen (true) or not (false). May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.removeRepository \u00b6 Asks a Hawk instance to stop monitoring a repository and remove its elements from the graph. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to stop monitoring. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.updateRepositoryCredentials \u00b6 Changes the credentials used to monitor a repository. Returns void . 
Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to update. cred Credentials The new credentials to be used. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.listRepositories \u00b6 Lists the repositories monitored by a Hawk instance. Returns list<Repository> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.listRepositoryTypes \u00b6 Lists the available repository types in this installation. Returns list<string> . Does not take any parameters. Hawk.listFiles \u00b6 Lists the paths of the files of the indexed repository. Returns list<string> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. repository list The URI of the indexed repository. filePatterns list File name patterns to search for (* lists all files). May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.configurePolling \u00b6 Sets the base polling period and max interval of a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. base i32 The base polling period (in seconds). max i32 The maximum polling interval (in seconds). May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. InvalidPollingConfiguration The polling configuration is not valid. Hawk.addDerivedAttribute \u00b6 Add a new derived attribute to a Hawk instance. 
Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. spec DerivedAttributeSpec The details of the new derived attribute. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. InvalidDerivedAttributeSpec The derived attribute specification is not valid. Hawk.removeDerivedAttribute \u00b6 Remove a derived attribute from a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. spec DerivedAttributeSpec The details of the derived attribute to be removed. Only the first three fields of the spec need to be populated. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.listDerivedAttributes \u00b6 Lists the derived attributes of a Hawk instance. Only the first three fields of the spec are currently populated. Returns list<DerivedAttributeSpec> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.addIndexedAttribute \u00b6 Add a new indexed attribute to a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. spec IndexedAttributeSpec The details of the new indexed attribute. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. InvalidIndexedAttributeSpec The indexed attribute specification is not valid. Hawk.removeIndexedAttribute \u00b6 Remove an indexed attribute from a Hawk instance. Returns void . 
Takes these parameters: Name Type Documentation name string The name of the Hawk instance. spec IndexedAttributeSpec The details of the indexed attribute to be removed. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.listIndexedAttributes \u00b6 Lists the indexed attributes of a Hawk instance. Returns list<IndexedAttributeSpec> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.getModel \u00b6 Returns the contents of one or more models indexed in a Hawk instance. Cross-model references are also resolved, and contained objects are always sent. Returns list<ModelElement> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. options HawkQueryOptions Options to limit the contents to be sent. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.getRootElements \u00b6 Returns the root objects of one or more models indexed in a Hawk instance. Node IDs are always sent, and contained objects are never sent. Returns list<ModelElement> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. options HawkQueryOptions Options to limit the contents to be sent. Hawk.watchStateChanges \u00b6 Returns subscription details to a queue of HawkStateEvents with notifications about changes in the indexer's state. Returns Subscription . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. 
HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.watchModelChanges \u00b6 Returns subscription details to a queue of HawkChangeEvents with notifications about changes to a set of indexed models. Returns Subscription . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. repositoryUri string The URI of the repository in which the model is contained. filePath list The pattern(s) for the model file(s) in the repository. clientID string Unique client ID (used as suffix for the queue name). durableEvents SubscriptionDurability Durability of the subscription. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. IFCExport \u00b6 IFC export facility for getting IFC models from the Hawk server. IFCExport.exportAsSTEP \u00b6 Export part of a Hawk index in IFC STEP format. Returns IFCExportJob . Takes these parameters: Name Type Documentation hawkInstance string options IFCExportOptions IFCExport.getJobs \u00b6 List all the previously scheduled IFC export jobs. Returns list<IFCExportJob> . Does not take any parameters. IFCExport.getJobStatus \u00b6 Retrieve the current status of the job with the specified ID. Returns IFCExportJob . Takes these parameters: Name Type Documentation jobID string IFCExport.killJob \u00b6 Cancel the job with the specified ID. Returns bool . Takes these parameters: Name Type Documentation jobID string Users \u00b6 The majority of service operations provided by the server require user authentication (indicated in the top-left cell of each operation table) to prevent unaccountable use. As such, the platform needs to provide basic user management service operations for creating, updating and deleting user accounts. When handling passwords, only SSL should be used, as otherwise they could be intercepted. Users.createUser \u00b6 Creates a new platform user. Returns void . 
Takes these parameters: Name Type Documentation username string A unique identifier for the user. password string The desired password. profile UserProfile The profile of the user. May throw these exceptions: Name Documentation UserExists The specified username already exists. Users.updateProfile \u00b6 Updates the profile of a platform user. Returns void . Takes these parameters: Name Type Documentation username string The name of the user to update the profile of. profile UserProfile The updated profile of the user. May throw these exceptions: Name Documentation UserNotFound The specified username does not exist. Users.updatePassword \u00b6 Updates the password of a platform user. Returns void . Takes these parameters: Name Type Documentation username string The name of the user to update the password of. newPassword string New password to be set. May throw these exceptions: Name Documentation UserNotFound The specified username does not exist. Users.deleteUser \u00b6 Deletes a platform user. Returns void . Takes these parameters: Name Type Documentation username string The name of the user to delete. May throw these exceptions: Name Documentation UserNotFound The specified username does not exist. Entities \u00b6 AttributeSlot \u00b6 Represents a slot that can store the value(s) of an attribute of a model element. Inherits from: Slot. Name Type Documentation name (inherited) string The name of the model element property the value of which is stored in this slot. value (optional) SlotValue Value of the slot (if set). Used in: ModelElement. CommitItem \u00b6 Simplified entry within a commit of a repository. Name Type Documentation path string Path within the repository, using / as separator. repoURL string URL of the repository. revision string Unique identifier of the revision of the repository. type CommitItemChangeType Type of change within the commit. 
Used in: HawkModelElementAdditionEvent, HawkModelElementRemovalEvent, HawkAttributeUpdateEvent, HawkAttributeRemovalEvent, HawkReferenceAdditionEvent, HawkReferenceRemovalEvent, HawkFileAdditionEvent, HawkFileRemovalEvent. ContainerSlot \u00b6 Represents a slot that can store other model elements within a model element. Inherits from: Slot. Name Type Documentation elements list Contained elements for this slot. name (inherited) string The name of the model element property the value of which is stored in this slot. Used in: ModelElement. Credentials \u00b6 Credentials of the client in the target VCS. Name Type Documentation password string Password for logging into the VCS. username string Username for logging into the VCS. Used in: Hawk.addRepository, Hawk.updateRepositoryCredentials. DerivedAttributeSpec \u00b6 Used to configure Hawk's derived attributes (discussed in D5.3). Name Type Documentation attributeName string The name of the derived attribute. attributeType (optional) string The (primitive) type of the derived attribute. derivationLanguage (optional) string The language used to express the derivation logic. derivationLogic (optional) string An executable expression of the derivation logic in the language above. isMany (optional) bool The multiplicity of the derived attribute. isOrdered (optional) bool A flag specifying whether the order of the values of the derived attribute is significant (only makes sense when isMany=true). isUnique (optional) bool A flag specifying whether the values of the derived attribute are unique (only makes sense when isMany=true). metamodelUri string The URI of the metamodel to which the derived attribute belongs. typeName string The name of the type to which the derived attribute belongs. Used in: Hawk.addDerivedAttribute, Hawk.removeDerivedAttribute, Hawk.listDerivedAttributes. EffectiveMetamodel \u00b6 Representation of a set of rules for either including or excluding certain types and/or slots within a metamodel. 
Name Type Documentation slots set Slots within the type that should be included or excluded: empty means 'all slots'. type string Type that should be included or excluded. Used in: EffectiveMetamodelMap. EffectiveMetamodelMap \u00b6 Representation of a set of rules for either including or excluding metamodels, types or slots. Name Type Documentation metamodel map > Types and slots within the metamodel that should be included or excluded: empty means 'all types and slots'. uri string Namespace URI of the metamodel. Used in: HawkQueryOptions, IFCExportOptions. File \u00b6 A file to be sent through the network. Name Type Documentation contents binary Sequence of bytes with the contents of the file. name string Name of the file. Used in: Hawk.registerMetamodels. HawkAttributeRemovalEvent \u00b6 Serialized form of an attribute removal event. Name Type Documentation attribute string Name of the attribute that was removed. id string Identifier of the model element that was changed. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent. HawkAttributeUpdateEvent \u00b6 Serialized form of an attribute update event. Name Type Documentation attribute string Name of the attribute that was changed. id string Identifier of the model element that was changed. value SlotValue New value for the attribute. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent. HawkChangeEvent \u00b6 Serialized form of a change in the indexed models of a Hawk instance. Name Type Documentation fileAddition HawkFileAdditionEvent A file was added. fileRemoval HawkFileRemovalEvent A file was removed. modelElementAddition HawkModelElementAdditionEvent A model element was added. modelElementAttributeRemoval HawkAttributeRemovalEvent An attribute was removed. modelElementAttributeUpdate HawkAttributeUpdateEvent An attribute was updated. modelElementRemoval HawkModelElementRemovalEvent A model element was removed. 
referenceAddition HawkReferenceAdditionEvent A reference was added. referenceRemoval HawkReferenceRemovalEvent A reference was removed. syncEnd HawkSynchronizationEndEvent Synchronization ended. syncStart HawkSynchronizationStartEvent Synchronization started. HawkFileAdditionEvent \u00b6 Serialized form of a file addition event. Name Type Documentation vcsItem CommitItem Reference to file that was added, including VCS metadata. Used in: HawkChangeEvent. HawkFileRemovalEvent \u00b6 A file was removed. Name Type Documentation vcsItem CommitItem Reference to file that was removed, including VCS metadata. Used in: HawkChangeEvent. HawkInstance \u00b6 Status of a Hawk instance. Name Type Documentation message string Last info message from the instance. name string The name of the instance. state HawkState Current state of the instance. Used in: Hawk.listInstances. HawkModelElementAdditionEvent \u00b6 Serialized form of a model element addition event. Name Type Documentation id string Identifier of the model element that was added. metamodelURI string Metamodel URI of the type of the model element. typeName string Name of the type of the model element. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent. HawkModelElementRemovalEvent \u00b6 Serialized form of a model element removal event. Name Type Documentation id string Identifier of the model element that was removed. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent. HawkQueryOptions \u00b6 Options for a Hawk query. Name Type Documentation defaultNamespaces (optional) string The default namespaces to be used to resolve ambiguous unqualified types. effectiveMetamodelExcludes (optional) map >> If set and not empty, the mentioned metamodels, types and features will not be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. 
effectiveMetamodelIncludes (optional) map >> If set and not empty, only the specified metamodels, types and features will be fetched. Otherwise, everything that is not excluded will be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. filePatterns (optional) list The file patterns for the query (e.g. *.uml). includeAttributes (optional) bool Whether to include attributes (true) or not (false) in model element results. includeContained (optional) bool Whether to include all the child elements of the model element results (true) or not (false). includeDerived (optional) bool Whether to include derived attributes (true) or not (false) in model element results. includeNodeIDs (optional) bool Whether to include node IDs (true) or not (false) in model element results. includeReferences (optional) bool Whether to include references (true) or not (false) in model element results. repositoryPattern (optional) string The repository for the query (or * for all repositories). Used in: Hawk.query, Hawk.resolveProxies, Hawk.getModel, Hawk.getRootElements. HawkReferenceAdditionEvent \u00b6 Serialized form of a reference addition event. Name Type Documentation refName string Name of the reference that was added. sourceId string Identifier of the source model element. targetId string Identifier of the target model element. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent. HawkReferenceRemovalEvent \u00b6 Serialized form of a reference removal event. Name Type Documentation refName string Name of the reference that was removed. sourceId string Identifier of the source model element. targetId string Identifier of the target model element. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent. HawkStateEvent \u00b6 Serialized form of a change in the state of a Hawk instance. 
Name Type Documentation message string Short message about the current status of the server. state HawkState State of the Hawk instance. timestamp i64 Timestamp for this state change. HawkSynchronizationEndEvent \u00b6 Serialized form of a sync end event. Name Type Documentation timestampNanos i64 Local timestamp, measured in nanoseconds. Only meant to be used to compute synchronization cost. Used in: HawkChangeEvent. HawkSynchronizationStartEvent \u00b6 Serialized form of a sync start event. Name Type Documentation timestampNanos i64 Local timestamp, measured in nanoseconds. Only meant to be used to compute synchronization cost. Used in: HawkChangeEvent. IFCExportJob \u00b6 Information about a server-side IFC export job. Name Type Documentation jobID string message string status IFCExportStatus Used in: IFCExport.exportAsSTEP, IFCExport.getJobs, IFCExport.getJobStatus. IFCExportOptions \u00b6 Options for a server-side IFC export. Name Type Documentation excludeRules (optional) map >> If set and not empty, the mentioned metamodels, types and features will not be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. filePatterns (optional) list The file patterns for the query (e.g. *.uml). includeRules (optional) map >> If set and not empty, only the specified metamodels, types and features will be fetched. Otherwise, everything that is not excluded will be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. repositoryPattern (optional) string The repository for the query (or * for all repositories). Used in: IFCExport.exportAsSTEP. IndexedAttributeSpec \u00b6 Used to configure Hawk's indexed attributes (discussed in D5.3). Name Type Documentation attributeName string The name of the indexed attribute. metamodelUri string The URI of the metamodel to which the indexed attribute belongs. typeName string The name of the type to which the indexed attribute belongs. 
Used in: Hawk.addIndexedAttribute, Hawk.removeIndexedAttribute, Hawk.listIndexedAttributes. InvalidModelSpec \u00b6 The model specification is not valid: the model or the metamodels are inaccessible or invalid. Name Type Documentation reason string Reason for the spec not being valid. spec ModelSpec A copy of the invalid model specification. InvalidTransformation \u00b6 The transformation is not valid: it is unparsable or inconsistent. Name Type Documentation location string Location of the problem, if applicable. Usually a combination of line and column numbers. reason string Reason for the transformation not being valid. MixedReference \u00b6 Represents a reference to a model element: it can be an identifier or a position. Only used when the same ReferenceSlot has both identifier-based and position-based references. This may be the case if we are retrieving a subset of the model which has references between its elements and with elements outside the subset at the same time. Name Type Documentation id string Identifier-based reference to a model element. position i32 Position-based reference to a model element. Used in: ReferenceSlot. ModelElement \u00b6 Represents a model element. Name Type Documentation attributes (optional) list Slots holding the values of the model element's attributes, if any have been set. containers (optional) list Slots holding contained model elements, if any have been set. file (optional) string Name of the file to which the element belongs (not set if equal to that of the previous model element). id (optional) string Unique ID of the model element (not set if using position-based references). metamodelUri (optional) string URI of the metamodel to which the type of the element belongs (not set if equal to that of the previous model element). references (optional) list Slots holding the values of the model element's references, if any have been set. 
repositoryURL (optional) string URI of the repository to which the element belongs (not set if equal to that of the previous model element). typeName (optional) string Name of the type that the model element is an instance of (not set if equal to that of the previous model element). Used in: Hawk.resolveProxies, Hawk.getModel, Hawk.getRootElements, ContainerSlot, QueryResult. ModelElementType \u00b6 Represents a type of model element. Name Type Documentation attributes (optional) list Metadata for the attribute slots. id string Unique ID of the model element type. metamodelUri string URI of the metamodel to which the type belongs. references (optional) list Metadata for the reference slots. typeName string Name of the type. Used in: QueryResult. ModelSpec \u00b6 Captures information about source/target models of ATL transformations. Name Type Documentation metamodelUris list The URIs of the metamodels to which elements of the model conform. uri string The URI from which the model will be loaded or to which it will be persisted. Used in: InvalidModelSpec. QueryResult \u00b6 Union type for a scalar value, a reference to a model element, a heterogeneous list or a string/value map. Query results may return all types of results, so we need to stay flexible. Inherits from: Value. Name Type Documentation vBoolean (inherited) bool Boolean (true/false) value. vByte (inherited) byte 8-bit signed integer value. vDouble (inherited) double 64-bit floating point value. vInteger (inherited) i32 32-bit signed integer value. vList list Nested list of query results. vLong (inherited) i64 64-bit signed integer value. vMap map Map between query results. vModelElement ModelElement Encoded model element. vModelElementType ModelElementType Encoded model element type. vShort (inherited) i16 16-bit signed integer value. vString (inherited) string Sequence of UTF8 characters. Used in: Hawk.query, QueryResult, QueryResultMap. 
QueryResultMap \u00b6 Name Type Documentation name string value QueryResult Used in: QueryResult. ReferenceSlot \u00b6 Represents a slot that can store the value(s) of a reference of a model element. References can be expressed as positions within a result tree (using pre-order traversal) or IDs. id, ids, position, positions and mixed are all mutually exclusive. At least one position or one ID must be given. Inherits from: Slot. Name Type Documentation id (optional) string Unique identifier of the referenced element (if there is only one ID-based reference in this slot). ids (optional) list Unique identifiers of the referenced elements (if more than one). mixed (optional) list Mix of identifier- and position-based references (if there is at least one position and one ID). name (inherited) string The name of the model element property the value of which is stored in this slot. position (optional) i32 Position of the referenced element (if there is only one position-based reference in this slot). positions (optional) list Positions of the referenced elements (if more than one). Used in: ModelElement. Repository \u00b6 Entity that represents a model repository. Name Type Documentation isFrozen (optional) bool True if the repository is frozen, false otherwise. type string The type of repository. uri string The URI to the repository. Used in: Hawk.addRepository, Hawk.listRepositories. Slot \u00b6 Represents a slot that can store the value(s) of a property of a model element. Inherited by: AttributeSlot, ReferenceSlot, ContainerSlot. Name Type Documentation name string The name of the model element property the value of which is stored in this slot. SlotMetadata \u00b6 Represents the metadata of a slot in a model element type. Name Type Documentation isMany bool True if this slot holds a collection of values instead of a single value. isOrdered bool True if the values in this slot are ordered. 
isUnique bool True if the value of this slot must be unique within its containing model. name string The name of the model element property that is stored in this slot. type string The type of the values in this slot. Used in: ModelElementType. SlotValue \u00b6 Union type for a single scalar value or a homogeneous collection of scalar values. Inherits from: Value. Name Type Documentation vBoolean (inherited) bool Boolean (true/false) value. vBooleans list List of true/false values. vByte (inherited) byte 8-bit signed integer value. vBytes binary List of 8-bit signed integers. vDouble (inherited) double 64-bit floating point value. vDoubles list List of 64-bit floating point values. vInteger (inherited) i32 32-bit signed integer value. vIntegers list List of 32-bit signed integers. vLong (inherited) i64 64-bit signed integer value. vLongs list List of 64-bit signed integers. vShort (inherited) i16 16-bit signed integer value. vShorts list List of 16-bit signed integers. vString (inherited) string Sequence of UTF8 characters. vStrings list List of sequences of UTF8 characters. Used in: HawkAttributeUpdateEvent, AttributeSlot. Subscription \u00b6 Details about a subscription to a topic queue. Name Type Documentation host string Host name of the message queue server. port i32 Port in which the message queue server is listening. queueAddress string Address of the topic queue. queueName string Name of the topic queue. sslRequired bool Whether SSL is required or not. Used in: Hawk.watchStateChanges, Hawk.watchModelChanges. UserProfile \u00b6 Minimal details about registered users. Name Type Documentation admin bool Whether the user has admin rights (i.e. so that they can create new users, change the status of admin users etc). realName string The real name of the user. Used in: Users.createUser, Users.updateProfile. Value \u00b6 Union type for a single scalar value. Inherited by: QueryResult, SlotValue. Name Type Documentation vBoolean bool Boolean (true/false) value. 
vByte byte 8-bit signed integer value. vDouble double 64-bit floating point value. vInteger i32 32-bit signed integer value. vLong i64 64-bit signed integer value. vShort i16 16-bit signed integer value. vString string Sequence of UTF8 characters. Enumerations \u00b6 CommitItemChangeType \u00b6 Type of change within a commit. Name Documentation ADDED File was added. DELETED File was removed. REPLACED File was replaced. UNKNOWN Unknown type of change. UPDATED File was updated. HawkState \u00b6 One of the states that a Hawk instance can be in. Name Documentation RUNNING The instance is running and monitoring the indexed locations. STOPPED The instance is stopped and is not monitoring any indexed locations. UPDATING The instance is updating its contents from the indexed locations. IFCExportStatus \u00b6 Status of a server-side IFC export job. Name Documentation CANCELLED The job has been cancelled. DONE The job is completed. FAILED The job has failed. RUNNING The job is currently running. SCHEDULED The job has been scheduled but has not started yet. SubscriptionDurability \u00b6 Durability of a subscription. Name Documentation DEFAULT Subscription survives client disconnections but not server restarts. DURABLE Subscription survives client disconnections and server restarts. TEMPORARY Subscription removed after disconnecting. Exceptions \u00b6 FailedQuery \u00b6 The specified query failed to complete its execution. Name Type Documentation reason string Reason for the query failing to complete its execution. Used in: Hawk.query. HawkInstanceNotFound \u00b6 No Hawk instance exists with that name. No fields for this entity. 
Used in: Hawk.removeInstance, Hawk.startInstance, Hawk.stopInstance, Hawk.syncInstance, Hawk.registerMetamodels, Hawk.unregisterMetamodels, Hawk.listMetamodels, Hawk.query, Hawk.resolveProxies, Hawk.addRepository, Hawk.isFrozen, Hawk.setFrozen, Hawk.removeRepository, Hawk.updateRepositoryCredentials, Hawk.listRepositories, Hawk.listFiles, Hawk.configurePolling, Hawk.addDerivedAttribute, Hawk.removeDerivedAttribute, Hawk.listDerivedAttributes, Hawk.addIndexedAttribute, Hawk.removeIndexedAttribute, Hawk.listIndexedAttributes, Hawk.getModel, Hawk.watchStateChanges, Hawk.watchModelChanges. HawkInstanceNotRunning \u00b6 The selected Hawk instance is not running. No fields for this entity. Used in: Hawk.stopInstance, Hawk.syncInstance, Hawk.registerMetamodels, Hawk.unregisterMetamodels, Hawk.listMetamodels, Hawk.query, Hawk.resolveProxies, Hawk.addRepository, Hawk.isFrozen, Hawk.setFrozen, Hawk.removeRepository, Hawk.updateRepositoryCredentials, Hawk.listRepositories, Hawk.listFiles, Hawk.configurePolling, Hawk.addDerivedAttribute, Hawk.removeDerivedAttribute, Hawk.listDerivedAttributes, Hawk.addIndexedAttribute, Hawk.removeIndexedAttribute, Hawk.listIndexedAttributes, Hawk.getModel, Hawk.watchStateChanges, Hawk.watchModelChanges. InvalidDerivedAttributeSpec \u00b6 The derived attribute specification is not valid. Name Type Documentation reason string Reason for the spec not being valid. Used in: Hawk.addDerivedAttribute. InvalidIndexedAttributeSpec \u00b6 The indexed attribute specification is not valid. Name Type Documentation reason string Reason for the spec not being valid. Used in: Hawk.addIndexedAttribute. InvalidMetamodel \u00b6 The provided metamodel is not valid (e.g. unparsable or inconsistent). Name Type Documentation reason string Reason for the metamodel not being valid. Used in: Hawk.registerMetamodels. InvalidPollingConfiguration \u00b6 The polling configuration is not valid. Name Type Documentation reason string Reason for the spec not being valid. 
Used in: Hawk.configurePolling. InvalidQuery \u00b6 The specified query is not valid. Name Type Documentation reason string Reason for the query not being valid. Used in: Hawk.query. UnknownQueryLanguage \u00b6 The specified query language is not supported by the operation. No fields for this entity. Used in: Hawk.query. UnknownRepositoryType \u00b6 The specified repository type is not supported by the operation. No fields for this entity. Used in: Hawk.addRepository. UserExists \u00b6 The specified username already exists. No fields for this entity. Used in: Users.createUser. UserNotFound \u00b6 The specified username does not exist. No fields for this entity. Used in: Users.updateProfile, Users.updatePassword, Users.deleteUser. VCSAuthenticationFailed \u00b6 The client failed to prove its identity in the VCS. No fields for this entity. Used in: Hawk.addRepository. This file was automatically generated by Ecore2Thrift. https://github.com/bluezio/ecore2thrift","title":"Thrift API"},{"location":"server/api/#services","text":"","title":"Services"},{"location":"server/api/#hawk","text":"The following service operations expose the capabilities of the Hawk heterogeneous model indexing framework.","title":"Hawk"},{"location":"server/api/#hawkcreateinstance","text":"Creates a new Hawk instance (stopped). Returns void . Takes these parameters: Name Type Documentation name string The unique name of the new Hawk instance. backend string The name of the backend to be used, as returned by listBackends(). minimumDelayMillis i32 Minimum delay between periodic synchronization in milliseconds. maximumDelayMillis i32 Maximum delay between periodic synchronization in milliseconds (0 to disable periodic synchronization). enabledPlugins (optional) list List of plugins to be enabled: if not set, all plugins are enabled.","title":"Hawk.createInstance"},{"location":"server/api/#hawklistbackends","text":"Lists the names of the available storage backends. Returns list<string> . 
Does not take any parameters.","title":"Hawk.listBackends"},{"location":"server/api/#hawklistplugins","text":"Lists all the Hawk plugins that can be enabled or disabled: metamodel parsers, model parsers and graph change listeners. Returns list<string> . Does not take any parameters.","title":"Hawk.listPlugins"},{"location":"server/api/#hawklistinstances","text":"Lists the details of all Hawk instances. Returns list<HawkInstance> . Does not take any parameters.","title":"Hawk.listInstances"},{"location":"server/api/#hawkremoveinstance","text":"Removes an existing Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to remove. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name.","title":"Hawk.removeInstance"},{"location":"server/api/#hawkstartinstance","text":"Starts a stopped Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to start. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name.","title":"Hawk.startInstance"},{"location":"server/api/#hawkstopinstance","text":"Stops a running Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to stop. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.stopInstance"},{"location":"server/api/#hawksyncinstance","text":"Forces an immediate synchronization on a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to synchronise. blockUntilDone (optional) bool If true, blocks the call until the synchronisation completes. False by default. 
May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.syncInstance"},{"location":"server/api/#hawkregistermetamodels","text":"Registers a set of file-based metamodels with a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. metamodel list The metamodels to register. More than one metamodel file can be provided in one request, to accommodate fragmented metamodels. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. InvalidMetamodel The provided metamodel is not valid (e.g. unparsable or inconsistent). HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.registerMetamodels"},{"location":"server/api/#hawkunregistermetamodels","text":"Unregisters a metamodel from a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. metamodel list The URIs of the metamodels. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.unregisterMetamodels"},{"location":"server/api/#hawklistmetamodels","text":"Lists the URIs of the registered metamodels of a Hawk instance. Returns list<string> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.listMetamodels"},{"location":"server/api/#hawklistquerylanguages","text":"Lists the supported query languages and their status. Returns list<string> . 
Takes these parameters: Name Type Documentation name string The name of the Hawk instance.","title":"Hawk.listQueryLanguages"},{"location":"server/api/#hawkquery","text":"Runs a query on a Hawk instance and returns a sequence of scalar values and/or model elements. Returns QueryResult . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. query string The query to be executed. language string The name of the query language used (e.g. EOL, OCL). options HawkQueryOptions Options for the query. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. UnknownQueryLanguage The specified query language is not supported by the operation. InvalidQuery The specified query is not valid. FailedQuery The specified query failed to complete its execution.","title":"Hawk.query"},{"location":"server/api/#hawkresolveproxies","text":"Returns populated model elements for the provided proxies. Returns list<ModelElement> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. ids list Proxy model element IDs to be resolved. options HawkQueryOptions Options for the query. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.resolveProxies"},{"location":"server/api/#hawkaddrepository","text":"Asks a Hawk instance to start monitoring a repository. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. repo Repository The repository to monitor. credentials (optional) Credentials A valid set of credentials that has read-access to the repository. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. 
UnknownRepositoryType The specified repository type is not supported by the operation. VCSAuthenticationFailed The client failed to prove its identity in the VCS.","title":"Hawk.addRepository"},{"location":"server/api/#hawkisfrozen","text":"Returns true if a repository is frozen, false otherwise. Returns bool . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to query. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.isFrozen"},{"location":"server/api/#hawksetfrozen","text":"Changes the 'frozen' state of a repository. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to be changed. isFrozen bool Whether the repository should be frozen (true) or not (false). May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.setFrozen"},{"location":"server/api/#hawkremoverepository","text":"Asks a Hawk instance to stop monitoring a repository and remove its elements from the graph. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to stop monitoring. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.removeRepository"},{"location":"server/api/#hawkupdaterepositorycredentials","text":"Changes the credentials used to monitor a repository. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to update. 
cred Credentials The new credentials to be used. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.updateRepositoryCredentials"},{"location":"server/api/#hawklistrepositories","text":"Lists the repositories monitored by a Hawk instance. Returns list<Repository> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.listRepositories"},{"location":"server/api/#hawklistrepositorytypes","text":"Lists the available repository types in this installation. Returns list<string> . Does not take any parameters.","title":"Hawk.listRepositoryTypes"},{"location":"server/api/#hawklistfiles","text":"Lists the paths of the files of the indexed repository. Returns list<string> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. repository list The URI of the indexed repository. filePatterns list File name patterns to search for (* lists all files). May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.listFiles"},{"location":"server/api/#hawkconfigurepolling","text":"Sets the base polling period and max interval of a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. base i32 The base polling period (in seconds). max i32 The maximum polling interval (in seconds). May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. 
InvalidPollingConfiguration The polling configuration is not valid.","title":"Hawk.configurePolling"},{"location":"server/api/#hawkaddderivedattribute","text":"Add a new derived attribute to a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. spec DerivedAttributeSpec The details of the new derived attribute. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. InvalidDerivedAttributeSpec The derived attribute specification is not valid.","title":"Hawk.addDerivedAttribute"},{"location":"server/api/#hawkremovederivedattribute","text":"Remove a derived attribute from a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. spec DerivedAttributeSpec The details of the derived attribute to be removed. Only the first three fields of the spec need to be populated. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.removeDerivedAttribute"},{"location":"server/api/#hawklistderivedattributes","text":"Lists the derived attributes of a Hawk instance. Only the first three fields of the spec are currently populated. Returns list<DerivedAttributeSpec> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.listDerivedAttributes"},{"location":"server/api/#hawkaddindexedattribute","text":"Add a new indexed attribute to a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. 
spec IndexedAttributeSpec The details of the new indexed attribute. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. InvalidIndexedAttributeSpec The indexed attribute specification is not valid.","title":"Hawk.addIndexedAttribute"},{"location":"server/api/#hawkremoveindexedattribute","text":"Remove an indexed attribute from a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. spec IndexedAttributeSpec The details of the indexed attribute to be removed. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.removeIndexedAttribute"},{"location":"server/api/#hawklistindexedattributes","text":"Lists the indexed attributes of a Hawk instance. Returns list<IndexedAttributeSpec> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.listIndexedAttributes"},{"location":"server/api/#hawkgetmodel","text":"Returns the contents of one or more models indexed in a Hawk instance. Cross-model references are also resolved, and contained objects are always sent. Returns list<ModelElement> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. options HawkQueryOptions Options to limit the contents to be sent. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. 
HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.getModel"},{"location":"server/api/#hawkgetrootelements","text":"Returns the root objects of one or more models indexed in a Hawk instance. Node IDs are always sent, and contained objects are never sent. Returns list<ModelElement> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. options HawkQueryOptions Options to limit the contents to be sent.","title":"Hawk.getRootElements"},{"location":"server/api/#hawkwatchstatechanges","text":"Returns subscription details to a queue of HawkStateEvents with notifications about changes in the indexer's state. Returns Subscription . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.watchStateChanges"},{"location":"server/api/#hawkwatchmodelchanges","text":"Returns subscription details to a queue of HawkChangeEvents with notifications about changes to a set of indexed models. Returns Subscription . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. repositoryUri string The URI of the repository in which the model is contained. filePath list The pattern(s) for the model file(s) in the repository. clientID string Unique client ID (used as suffix for the queue name). durableEvents SubscriptionDurability Durability of the subscription. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. 
HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.watchModelChanges"},{"location":"server/api/#ifcexport","text":"IFC export facility for getting IFC models from the Hawk server.","title":"IFCExport"},{"location":"server/api/#ifcexportexportasstep","text":"Export part of a Hawk index in IFC STEP format. Returns IFCExportJob . Takes these parameters: Name Type Documentation hawkInstance string options IFCExportOptions","title":"IFCExport.exportAsSTEP"},{"location":"server/api/#ifcexportgetjobs","text":"List all the previously scheduled IFC export jobs. Returns list<IFCExportJob> . Does not take any parameters.","title":"IFCExport.getJobs"},{"location":"server/api/#ifcexportgetjobstatus","text":"Retrieve the current status of the job with the specified ID. Returns IFCExportJob . Takes these parameters: Name Type Documentation jobID string","title":"IFCExport.getJobStatus"},{"location":"server/api/#ifcexportkilljob","text":"Cancel the job with the specified ID. Returns bool . Takes these parameters: Name Type Documentation jobID string","title":"IFCExport.killJob"},{"location":"server/api/#users","text":"The majority of service operations provided by the server require user authentication (indicated in the top-left cell of each operation table) to prevent unaccountable use. As such, the platform needs to provide basic user management service operations for creating, updating and deleting user accounts. When handling passwords, only SSL should be used, as otherwise they could be intercepted.","title":"Users"},{"location":"server/api/#userscreateuser","text":"Creates a new platform user. Returns void . Takes these parameters: Name Type Documentation username string A unique identifier for the user. password string The desired password. profile UserProfile The profile of the user. 
May throw these exceptions: Name Documentation UserExists The specified username already exists.","title":"Users.createUser"},{"location":"server/api/#usersupdateprofile","text":"Updates the profile of a platform user. Returns void . Takes these parameters: Name Type Documentation username string The name of the user to update the profile of. profile UserProfile The updated profile of the user. May throw these exceptions: Name Documentation UserNotFound The specified username does not exist.","title":"Users.updateProfile"},{"location":"server/api/#usersupdatepassword","text":"Updates the password of a platform user. Returns void . Takes these parameters: Name Type Documentation username string The name of the user to update the password of. newPassword string New password to be set. May throw these exceptions: Name Documentation UserNotFound The specified username does not exist.","title":"Users.updatePassword"},{"location":"server/api/#usersdeleteuser","text":"Deletes a platform user. Returns void . Takes these parameters: Name Type Documentation username string The name of the user to delete. May throw these exceptions: Name Documentation UserNotFound The specified username does not exist.","title":"Users.deleteUser"},{"location":"server/api/#entities","text":"","title":"Entities"},{"location":"server/api/#attributeslot","text":"Represents a slot that can store the value(s) of an attribute of a model element. Inherits from: Slot. Name Type Documentation name (inherited) string The name of the model element property the value of which is stored in this slot. value (optional) SlotValue Value of the slot (if set). Used in: ModelElement.","title":"AttributeSlot"},{"location":"server/api/#commititem","text":"Simplified entry within a commit of a repository. Name Type Documentation path string Path within the repository, using / as separator. repoURL string URL of the repository. revision string Unique identifier of the revision of the repository. 
type CommitItemChangeType Type of change within the commit. Used in: HawkModelElementAdditionEvent, HawkModelElementRemovalEvent, HawkAttributeUpdateEvent, HawkAttributeRemovalEvent, HawkReferenceAdditionEvent, HawkReferenceRemovalEvent, HawkFileAdditionEvent, HawkFileRemovalEvent.","title":"CommitItem"},{"location":"server/api/#containerslot","text":"Represents a slot that can store other model elements within a model element. Inherits from: Slot. Name Type Documentation elements list Contained elements for this slot. name (inherited) string The name of the model element property the value of which is stored in this slot. Used in: ModelElement.","title":"ContainerSlot"},{"location":"server/api/#credentials","text":"Credentials of the client in the target VCS. Name Type Documentation password string Password for logging into the VCS. username string Username for logging into the VCS. Used in: Hawk.addRepository, Hawk.updateRepositoryCredentials.","title":"Credentials"},{"location":"server/api/#derivedattributespec","text":"Used to configure Hawk's derived attributes (discussed in D5.3). Name Type Documentation attributeName string The name of the derived attribute. attributeType (optional) string The (primitive) type of the derived attribute. derivationLanguage (optional) string The language used to express the derivation logic. derivationLogic (optional) string An executable expression of the derivation logic in the language above. isMany (optional) bool The multiplicity of the derived attribute. isOrdered (optional) bool A flag specifying whether the order of the values of the derived attribute is significant (only makes sense when isMany=true). isUnique (optional) bool A flag specifying whether the values of the derived attribute are unique (only makes sense when isMany=true). metamodelUri string The URI of the metamodel to which the derived attribute belongs. typeName string The name of the type to which the derived attribute belongs. 
Used in: Hawk.addDerivedAttribute, Hawk.removeDerivedAttribute, Hawk.listDerivedAttributes.","title":"DerivedAttributeSpec"},{"location":"server/api/#effectivemetamodel","text":"Representation of a set of rules for either including or excluding certain types and/or slots within a metamodel. Name Type Documentation slots set Slots within the type that should be included or excluded: empty means 'all slots'. type string Type that should be included or excluded. Used in: EffectiveMetamodelMap.","title":"EffectiveMetamodel"},{"location":"server/api/#effectivemetamodelmap","text":"Representation of a set of rules for either including or excluding metamodels, types or slots. Name Type Documentation metamodel map > Types and slots within the metamodel that should be included or excluded: empty means 'all types and slots'. uri string Namespace URI of the metamodel. Used in: HawkQueryOptions, IFCExportOptions.","title":"EffectiveMetamodelMap"},{"location":"server/api/#file","text":"A file to be sent through the network. Name Type Documentation contents binary Sequence of bytes with the contents of the file. name string Name of the file. Used in: Hawk.registerMetamodels.","title":"File"},{"location":"server/api/#hawkattributeremovalevent","text":"Serialized form of an attribute removal event. Name Type Documentation attribute string Name of the attribute that was removed. id string Identifier of the model element that was changed. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent.","title":"HawkAttributeRemovalEvent"},{"location":"server/api/#hawkattributeupdateevent","text":"Serialized form of an attribute update event. Name Type Documentation attribute string Name of the attribute that was changed. id string Identifier of the model element that was changed. value SlotValue New value for the attribute. vcsItem CommitItem Entry within the commit that produced the changes. 
Used in: HawkChangeEvent.","title":"HawkAttributeUpdateEvent"},{"location":"server/api/#hawkchangeevent","text":"Serialized form of a change in the indexed models of a Hawk instance. Name Type Documentation fileAddition HawkFileAdditionEvent A file was added. fileRemoval HawkFileRemovalEvent A file was removed. modelElementAddition HawkModelElementAdditionEvent A model element was added. modelElementAttributeRemoval HawkAttributeRemovalEvent An attribute was removed. modelElementAttributeUpdate HawkAttributeUpdateEvent An attribute was updated. modelElementRemoval HawkModelElementRemovalEvent A model element was removed. referenceAddition HawkReferenceAdditionEvent A reference was added. referenceRemoval HawkReferenceRemovalEvent A reference was removed. syncEnd HawkSynchronizationEndEvent Synchronization ended. syncStart HawkSynchronizationStartEvent Synchronization started.","title":"HawkChangeEvent"},{"location":"server/api/#hawkfileadditionevent","text":"Serialized form of a file addition event. Name Type Documentation vcsItem CommitItem Reference to file that was added, including VCS metadata. Used in: HawkChangeEvent.","title":"HawkFileAdditionEvent"},{"location":"server/api/#hawkfileremovalevent","text":"A file was removed. Name Type Documentation vcsItem CommitItem Reference to file that was removed, including VCS metadata. Used in: HawkChangeEvent.","title":"HawkFileRemovalEvent"},{"location":"server/api/#hawkinstance","text":"Status of a Hawk instance. Name Type Documentation message string Last info message from the instance. name string The name of the instance. state HawkState Current state of the instance. Used in: Hawk.listInstances.","title":"HawkInstance"},{"location":"server/api/#hawkmodelelementadditionevent","text":"Serialized form of a model element addition event. Name Type Documentation id string Identifier of the model element that was added. metamodelURI string Metamodel URI of the type of the model element. 
typeName string Name of the type of the model element. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent.","title":"HawkModelElementAdditionEvent"},{"location":"server/api/#hawkmodelelementremovalevent","text":"Serialized form of a model element removal event. Name Type Documentation id string Identifier of the model element that was removed. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent.","title":"HawkModelElementRemovalEvent"},{"location":"server/api/#hawkqueryoptions","text":"Options for a Hawk query. Name Type Documentation defaultNamespaces (optional) string The default namespaces to be used to resolve ambiguous unqualified types. effectiveMetamodelExcludes (optional) map >> If set and not empty, the mentioned metamodels, types and features will not be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. effectiveMetamodelIncludes (optional) map >> If set and not empty, only the specified metamodels, types and features will be fetched. Otherwise, everything that is not excluded will be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. filePatterns (optional) list The file patterns for the query (e.g. *.uml). includeAttributes (optional) bool Whether to include attributes (true) or not (false) in model element results. includeContained (optional) bool Whether to include all the child elements of the model element results (true) or not (false). includeDerived (optional) bool Whether to include derived attributes (true) or not (false) in model element results. includeNodeIDs (optional) bool Whether to include node IDs (true) or not (false) in model element results. includeReferences (optional) bool Whether to include references (true) or not (false) in model element results. 
repositoryPattern (optional) string The repository for the query (or * for all repositories). Used in: Hawk.query, Hawk.resolveProxies, Hawk.getModel, Hawk.getRootElements.","title":"HawkQueryOptions"},{"location":"server/api/#hawkreferenceadditionevent","text":"Serialized form of a reference addition event. Name Type Documentation refName string Name of the reference that was added. sourceId string Identifier of the source model element. targetId string Identifier of the target model element. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent.","title":"HawkReferenceAdditionEvent"},{"location":"server/api/#hawkreferenceremovalevent","text":"Serialized form of a reference removal event. Name Type Documentation refName string Name of the reference that was removed. sourceId string Identifier of the source model element. targetId string Identifier of the target model element. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent.","title":"HawkReferenceRemovalEvent"},{"location":"server/api/#hawkstateevent","text":"Serialized form of a change in the state of a Hawk instance. Name Type Documentation message string Short message about the current status of the server. state HawkState State of the Hawk instance. timestamp i64 Timestamp for this state change.","title":"HawkStateEvent"},{"location":"server/api/#hawksynchronizationendevent","text":"Serialized form of a sync end event. Name Type Documentation timestampNanos i64 Local timestamp, measured in nanoseconds. Only meant to be used to compute synchronization cost. Used in: HawkChangeEvent.","title":"HawkSynchronizationEndEvent"},{"location":"server/api/#hawksynchronizationstartevent","text":"Serialized form of a sync start event. Name Type Documentation timestampNanos i64 Local timestamp, measured in nanoseconds. Only meant to be used to compute synchronization cost. 
Used in: HawkChangeEvent.","title":"HawkSynchronizationStartEvent"},{"location":"server/api/#ifcexportjob","text":"Information about a server-side IFC export job. Name Type Documentation jobID string message string status IFCExportStatus Used in: IFCExport.exportAsSTEP, IFCExport.getJobs, IFCExport.getJobStatus.","title":"IFCExportJob"},{"location":"server/api/#ifcexportoptions","text":"Options for a server-side IFC export. Name Type Documentation excludeRules (optional) map >> If set and not empty, the mentioned metamodels, types and features will not be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. filePatterns (optional) list The file patterns for the query (e.g. *.uml). includeRules (optional) map >> If set and not empty, only the specified metamodels, types and features will be fetched. Otherwise, everything that is not excluded will be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. repositoryPattern (optional) string The repository for the query (or * for all repositories). Used in: IFCExport.exportAsSTEP.","title":"IFCExportOptions"},{"location":"server/api/#indexedattributespec","text":"Used to configure Hawk's indexed attributes (discussed in D5.3). Name Type Documentation attributeName string The name of the indexed attribute. metamodelUri string The URI of the metamodel to which the indexed attribute belongs. typeName string The name of the type to which the indexed attribute belongs. Used in: Hawk.addIndexedAttribute, Hawk.removeIndexedAttribute, Hawk.listIndexedAttributes.","title":"IndexedAttributeSpec"},{"location":"server/api/#invalidmodelspec","text":"The model specification is not valid: the model or the metamodels are inaccessible or invalid. Name Type Documentation reason string Reason for the spec not being valid. 
spec ModelSpec A copy of the invalid model specification.","title":"InvalidModelSpec"},{"location":"server/api/#invalidtransformation","text":"The transformation is not valid: it is unparsable or inconsistent. Name Type Documentation location string Location of the problem, if applicable. Usually a combination of line and column numbers. reason string Reason for the transformation not being valid.","title":"InvalidTransformation"},{"location":"server/api/#mixedreference","text":"Represents a reference to a model element: it can be an identifier or a position. Only used when the same ReferenceSlot has both identifier-based and position-based references. This may be the case if we are retrieving a subset of the model which has references between its elements and with elements outside the subset at the same time. Name Type Documentation id string Identifier-based reference to a model element. position i32 Position-based reference to a model element. Used in: ReferenceSlot.","title":"MixedReference"},{"location":"server/api/#modelelement","text":"Represents a model element. Name Type Documentation attributes (optional) list Slots holding the values of the model element's attributes, if any have been set. containers (optional) list Slots holding contained model elements, if any have been set. file (optional) string Name of the file to which the element belongs (not set if equal to that of the previous model element). id (optional) string Unique ID of the model element (not set if using position-based references). metamodelUri (optional) string URI of the metamodel to which the type of the element belongs (not set if equal to that of the previous model element). references (optional) list Slots holding the values of the model element's references, if any have been set. repositoryURL (optional) string URI of the repository to which the element belongs (not set if equal to that of the previous model element). 
typeName (optional) string Name of the type that the model element is an instance of (not set if equal to that of the previous model element). Used in: Hawk.resolveProxies, Hawk.getModel, Hawk.getRootElements, ContainerSlot, QueryResult.","title":"ModelElement"},{"location":"server/api/#modelelementtype","text":"Represents a type of model element. Name Type Documentation attributes (optional) list Metadata for the attribute slots. id string Unique ID of the model element type. metamodelUri string URI of the metamodel to which the type belongs. references (optional) list Metadata for the reference slots. typeName string Name of the type. Used in: QueryResult.","title":"ModelElementType"},{"location":"server/api/#modelspec","text":"Captures information about source/target models of ATL transformations. Name Type Documentation metamodelUris list The URIs of the metamodels to which elements of the model conform. uri string The URI from which the model will be loaded or to which it will be persisted. Used in: InvalidModelSpec.","title":"ModelSpec"},{"location":"server/api/#queryresult","text":"Union type for a scalar value, a reference to a model element, a heterogeneous list or a string/value map. Query results may return all types of results, so we need to stay flexible. Inherits from: Value. Name Type Documentation vBoolean (inherited) bool Boolean (true/false) value. vByte (inherited) byte 8-bit signed integer value. vDouble (inherited) double 64-bit floating point value. vInteger (inherited) i32 32-bit signed integer value. vList list Nested list of query results. vLong (inherited) i64 64-bit signed integer value. vMap map Map between query results. vModelElement ModelElement Encoded model element. vModelElementType ModelElementType Encoded model element type. vShort (inherited) i16 16-bit signed integer value. vString (inherited) string Sequence of UTF8 characters. 
Used in: Hawk.query, QueryResult, QueryResultMap.","title":"QueryResult"},{"location":"server/api/#queryresultmap","text":"Name Type Documentation name string value QueryResult Used in: QueryResult.","title":"QueryResultMap"},{"location":"server/api/#referenceslot","text":"Represents a slot that can store the value(s) of a reference of a model element. References can be expressed as positions within a result tree (using pre-order traversal) or IDs. id, ids, position, positions and mixed are all mutually exclusive. At least one position or one ID must be given. Inherits from: Slot. Name Type Documentation id (optional) string Unique identifier of the referenced element (if there is only one ID based reference in this slot). ids (optional) list Unique identifiers of the referenced elements (if more than one). mixed (optional) list Mix of identifier- and position-based references (if there is at least one position and one ID). name (inherited) string The name of the model element property the value of which is stored in this slot. position (optional) i32 Position of the referenced element (if there is only one position-based reference in this slot). positions (optional) list Positions of the referenced elements (if more than one). Used in: ModelElement.","title":"ReferenceSlot"},{"location":"server/api/#repository","text":"Entity that represents a model repository. Name Type Documentation isFrozen (optional) bool True if the repository is frozen, false otherwise. type string The type of repository. uri string The URI to the repository. Used in: Hawk.addRepository, Hawk.listRepositories.","title":"Repository"},{"location":"server/api/#slot","text":"Represents a slot that can store the value(s) of a property of a model element. Inherited by: AttributeSlot, ReferenceSlot, ContainerSlot. 
Name Type Documentation name string The name of the model element property the value of which is stored in this slot.","title":"Slot"},{"location":"server/api/#slotmetadata","text":"Represents the metadata of a slot in a model element type. Name Type Documentation isMany bool True if this slot holds a collection of values instead of a single value. isOrdered bool True if the values in this slot are ordered. isUnique bool True if the value of this slot must be unique within its containing model. name string The name of the model element property that is stored in this slot. type string The type of the values in this slot. Used in: ModelElementType.","title":"SlotMetadata"},{"location":"server/api/#slotvalue","text":"Union type for a single scalar value or a homogeneous collection of scalar values. Inherits from: Value. Name Type Documentation vBoolean (inherited) bool Boolean (true/false) value. vBooleans list List of true/false values. vByte (inherited) byte 8-bit signed integer value. vBytes binary List of 8-bit signed integers. vDouble (inherited) double 64-bit floating point value. vDoubles list List of 64-bit floating point values. vInteger (inherited) i32 32-bit signed integer value. vIntegers list List of 32-bit signed integers. vLong (inherited) i64 64-bit signed integer value. vLongs list List of 64-bit signed integers. vShort (inherited) i16 16-bit signed integer value. vShorts list List of 16-bit signed integers. vString (inherited) string Sequence of UTF8 characters. vStrings list List of sequences of UTF8 characters. Used in: HawkAttributeUpdateEvent, AttributeSlot.","title":"SlotValue"},{"location":"server/api/#subscription","text":"Details about a subscription to a topic queue. Name Type Documentation host string Host name of the message queue server. port i32 Port in which the message queue server is listening. queueAddress string Address of the topic queue. queueName string Name of the topic queue. sslRequired bool Whether SSL is required or not. 
Used in: Hawk.watchStateChanges, Hawk.watchModelChanges.","title":"Subscription"},{"location":"server/api/#userprofile","text":"Minimal details about registered users. Name Type Documentation admin bool Whether the user has admin rights (i.e. so that they can create new users, change the status of admin users etc). realName string The real name of the user. Used in: Users.createUser, Users.updateProfile.","title":"UserProfile"},{"location":"server/api/#value","text":"Union type for a single scalar value. Inherited by: QueryResult, SlotValue. Name Type Documentation vBoolean bool Boolean (true/false) value. vByte byte 8-bit signed integer value. vDouble double 64-bit floating point value. vInteger i32 32-bit signed integer value. vLong i64 64-bit signed integer value. vShort i16 16-bit signed integer value. vString string Sequence of UTF8 characters.","title":"Value"},{"location":"server/api/#enumerations","text":"","title":"Enumerations"},{"location":"server/api/#commititemchangetype","text":"Type of change within a commit. Name Documentation ADDED File was added. DELETED File was removed. REPLACED File was replaced. UNKNOWN Unknown type of change. UPDATED File was updated.","title":"CommitItemChangeType"},{"location":"server/api/#hawkstate","text":"One of the states that a Hawk instance can be in. Name Documentation RUNNING The instance is running and monitoring the indexed locations. STOPPED The instance is stopped and is not monitoring any indexed locations. UPDATING The instance is updating its contents from the indexed locations.","title":"HawkState"},{"location":"server/api/#ifcexportstatus","text":"Status of a server-side IFC export job. Name Documentation CANCELLED The job has been cancelled. DONE The job is completed. FAILED The job has failed. RUNNING The job is currently running. SCHEDULED The job has been scheduled but has not started yet.","title":"IFCExportStatus"},{"location":"server/api/#subscriptiondurability","text":"Durability of a subscription. 
Name Documentation DEFAULT Subscription survives client disconnections but not server restarts. DURABLE Subscription survives client disconnections and server restarts. TEMPORARY Subscription removed after disconnecting.","title":"SubscriptionDurability"},{"location":"server/api/#failedquery","text":"The specified query failed to complete its execution. Name Type Documentation reason string Reason for the query failing to complete its execution. Used in: Hawk.query.","title":"FailedQuery"},{"location":"server/api/#hawkinstancenotfound","text":"No Hawk instance exists with that name. No fields for this entity. Used in: Hawk.removeInstance, Hawk.startInstance, Hawk.stopInstance, Hawk.syncInstance, Hawk.registerMetamodels, Hawk.unregisterMetamodels, Hawk.listMetamodels, Hawk.query, Hawk.resolveProxies, Hawk.addRepository, Hawk.isFrozen, Hawk.setFrozen, Hawk.removeRepository, Hawk.updateRepositoryCredentials, Hawk.listRepositories, Hawk.listFiles, Hawk.configurePolling, Hawk.addDerivedAttribute, Hawk.removeDerivedAttribute, Hawk.listDerivedAttributes, Hawk.addIndexedAttribute, Hawk.removeIndexedAttribute, Hawk.listIndexedAttributes, Hawk.getModel, Hawk.watchStateChanges, Hawk.watchModelChanges.","title":"HawkInstanceNotFound"},{"location":"server/api/#hawkinstancenotrunning","text":"The selected Hawk instance is not running. No fields for this entity. 
Used in: Hawk.stopInstance, Hawk.syncInstance, Hawk.registerMetamodels, Hawk.unregisterMetamodels, Hawk.listMetamodels, Hawk.query, Hawk.resolveProxies, Hawk.addRepository, Hawk.isFrozen, Hawk.setFrozen, Hawk.removeRepository, Hawk.updateRepositoryCredentials, Hawk.listRepositories, Hawk.listFiles, Hawk.configurePolling, Hawk.addDerivedAttribute, Hawk.removeDerivedAttribute, Hawk.listDerivedAttributes, Hawk.addIndexedAttribute, Hawk.removeIndexedAttribute, Hawk.listIndexedAttributes, Hawk.getModel, Hawk.watchStateChanges, Hawk.watchModelChanges.","title":"HawkInstanceNotRunning"},{"location":"server/api/#invalidderivedattributespec","text":"The derived attribute specification is not valid. Name Type Documentation reason string Reason for the spec not being valid. Used in: Hawk.addDerivedAttribute.","title":"InvalidDerivedAttributeSpec"},{"location":"server/api/#invalidindexedattributespec","text":"The indexed attribute specification is not valid. Name Type Documentation reason string Reason for the spec not being valid. Used in: Hawk.addIndexedAttribute.","title":"InvalidIndexedAttributeSpec"},{"location":"server/api/#invalidmetamodel","text":"The provided metamodel is not valid (e.g. unparsable or inconsistent). Name Type Documentation reason string Reason for the metamodel not being valid. Used in: Hawk.registerMetamodels.","title":"InvalidMetamodel"},{"location":"server/api/#invalidpollingconfiguration","text":"The polling configuration is not valid. Name Type Documentation reason string Reason for the spec not being valid. Used in: Hawk.configurePolling.","title":"InvalidPollingConfiguration"},{"location":"server/api/#invalidquery","text":"The specified query is not valid. Name Type Documentation reason string Reason for the query not being valid. Used in: Hawk.query.","title":"InvalidQuery"},{"location":"server/api/#unknownquerylanguage","text":"The specified query language is not supported by the operation. No fields for this entity. 
Used in: Hawk.query.","title":"UnknownQueryLanguage"},{"location":"server/api/#unknownrepositorytype","text":"The specified repository type is not supported by the operation. No fields for this entity. Used in: Hawk.addRepository.","title":"UnknownRepositoryType"},{"location":"server/api/#userexists","text":"The specified username already exists. No fields for this entity. Used in: Users.createUser.","title":"UserExists"},{"location":"server/api/#usernotfound","text":"The specified username does not exist. No fields for this entity. Used in: Users.updateProfile, Users.updatePassword, Users.deleteUser.","title":"UserNotFound"},{"location":"server/api/#vcsauthenticationfailed","text":"The client failed to prove its identity in the VCS. No fields for this entity. Used in: Hawk.addRepository. This file was automatically generated by Ecore2Thrift. https://github.com/bluezio/ecore2thrift","title":"VCSAuthenticationFailed"},{"location":"server/architecture/","text":"If an entire team is querying the same set of models, indexing them from a central location is more efficient than maintaining multiple indexes. In other cases, we may want to query models from outside Eclipse and even from applications written in other languages (e.g. C++ or Python). To support these use cases, Hawk includes a server that exposes its functionality through a set of Thrift APIs. This server product is a headless Eclipse application that can be run from the command line. The general structure is as shown here: The server component is implemented as an Eclipse application, based on the Eclipse Equinox OSGi runtime. Using Eclipse Equinox for the server allows for integrating the Eclipse-based tools with very few changes in their code, while reducing the chances of mutual interference. 
The OSGi class loading mechanisms ensure that each plugin only \"sees\" the classes that it declares as dependencies, avoiding common clashes such as requiring different versions of the same Java library or overriding a configuration file with an unexpected copy from another library. To mitigate the risk of connectivity problems due to enterprise firewalls, the server uses the standard HTTP and HTTPS protocols for most of the API (by default, on the unprivileged ports 8080 and 8443) and secures them through Apache Shiro . Optionally, the Hawk API can be exposed through raw TCP on port 2080, for increased performance: however, security-conscious environments should leave it disabled as it does not support authentication. The embedded Apache Artemis messaging queue required for remote change notifications in Hawk requires its own port, as it manages its own network connections. By default, this is port 61616. These notifications are made available through two protocols: Artemis Core (a lightweight replacement for the Java Message Service, for Java clients) and STOMP over WebSockets (a cross-language messaging protocol, for web-based clients). The server includes plugins that use the standard OSGi HttpService facilities to register servlets and filters. Each service is implemented as one or more of these servlets. The currently implemented endpoints are these: Path within server Service Thrift protocol /thrift/hawk/binary Hawk Binary /thrift/hawk/compact Hawk Compact /thrift/hawk/json Hawk JSON /thrift/hawk/tuple Hawk Tuple /thrift/users Users JSON All services provide a JSON endpoint, since it is compatible across all languages supported by Thrift and works well with web-based clients. However, since Hawk is performance sensitive (as we might need to encode a large number of model elements in the results of a query), it also provides endpoints with the other Thrift protocols. 
Binary is the most portable after JSON, and Tuple is the most efficient but is only usable from Java clients. Having all four protocols allows Hawk clients to pick the most efficient protocol that is available for their language. The available operations for the Users and Hawk APIs are listed in Thrift API . For details about the optional access control to these APIs, check Thrift API security .","title":"Architecture"},{"location":"server/cli/","text":"You can talk to a Hawk server from one of the console client products in the latest release . Using the product only requires unpacking the product and running the main executable within it. Alternatively, you could install the \"Hawk CLI Feature\" into your Eclipse instance and use these commands from the \"Host OSGi Console\" in the Console view. Each Thrift API has its own set of commands. Hawk \u00b6 You can use the hawkHelp command to list all the available commands. Connecting to Hawk \u00b6 Name Description hawkConnect <url> [username] [password] Connects to a Thrift endpoint (guesses the protocol from the URL) hawkDisconnect Disconnects from the current Thrift endpoint Managing Hawk indexer instances \u00b6 Name Description hawkAddInstance <name> <backend> [minDelay] [maxDelay|0] Adds an instance with the provided name (if maxDelay = 0, periodic updates are disabled) hawkListBackends Lists the available Hawk backends hawkListInstances Lists the available Hawk instances hawkRemoveInstance <name> Removes an instance with the provided name, if it exists hawkSelectInstance <name> Selects the instance with the provided name hawkStartInstance <name> Starts the instance with the provided name hawkStopInstance <name> Stops the instance with the provided name hawkSyncInstance <name> [waitForSync:true|false] Requests an immediate sync on the instance with the provided name Managing metamodels \u00b6 Name Description hawkListMetamodels Lists all registered metamodels in this instance hawkRegisterMetamodel <files...> 
Registers one or more metamodels hawkUnregisterMetamodel <uri> Unregisters the metamodel with the specified URI Managing version control repositories \u00b6 Name Description hawkAddRepository <url> <type> [user] [pwd] Adds a repository hawkListFiles <url> [filepatterns...] Lists files within a repository hawkListRepositories Lists all registered repositories in this instance hawkListRepositoryTypes Lists available repository types hawkRemoveRepository <url> Removes the repository with the specified URL hawkUpdateRepositoryCredentials <url> <user> <pwd> Changes the user/password used to monitor a repository Querying models \u00b6 Name Description hawkGetModel <repo> [filepatterns...] Returns all the model elements of the specified files within the repo hawkGetRoots <repo> [filepatterns...] Returns only the root model elements of the specified files within the repo hawkListQueryLanguages Lists all available query languages hawkQuery <query> <language> [repo] [files] Queries the index hawkResolveProxies <ids...> Retrieves model elements by ID Managing derived attributes \u00b6 Name Description hawkAddDerivedAttribute <mmURI> <mmType> <name> <type> <lang> <expr> [many|ordered|unique]* Adds a derived attribute hawkListDerivedAttributes Lists all available derived attributes hawkRemoveDerivedAttribute <mmURI> <mmType> <name> Removes a derived attribute, if it exists Managing indexed attributes \u00b6 Name Description hawkAddIndexedAttribute <mmURI> <mmType> <name> Adds an indexed attribute hawkListIndexedAttributes Lists all available indexed attributes hawkRemoveIndexedAttribute <mmURI> <mmType> <name> Removes an indexed attribute, if it exists Watching over changes in remote models \u00b6 Name Description hawkWatchModelChanges [default|temporary|durable] [client ID] [repo] [files...] 
Watches an Artemis message queue with detected model changes Users \u00b6 The Users API has its own set of commands, which can be listed through usersHelp : Name Description usersHelp Lists all the available commands for Users usersConnect <url> [username] [password] Connects to a Thrift endpoint usersDisconnect Disconnects from the current Thrift endpoint usersAdd <username> <realname> <isAdmin: true|false> [password] Adds the user to the database usersUpdateProfile <username> <realname> <isAdmin: true|false> Changes the personal information of a user usersUpdatePassword <username> [password] Changes the password of a user usersRemove <username> Removes a user usersCheck <username> [password] Validates credentials","title":"Console client"},{"location":"server/cli/#hawk","text":"You can use the hawkHelp command to list all the available commands.","title":"Hawk"},{"location":"server/cli/#connecting-to-hawk","text":"Name Description hawkConnect <url> [username] [password] Connects to a Thrift endpoint (guesses the protocol from the URL) hawkDisconnect Disconnects from the current Thrift endpoint","title":"Connecting to Hawk"},{"location":"server/cli/#managing-hawk-indexer-instances","text":"Name Description hawkAddInstance <name> <backend> [minDelay] [maxDelay|0] Adds an instance with the provided name (if maxDelay = 0, periodic updates are disabled) hawkListBackends Lists the available Hawk backends hawkListInstances Lists the available Hawk instances hawkRemoveInstance <name> Removes an instance with the provided name, if it exists hawkSelectInstance <name> Selects the instance with the provided name hawkStartInstance <name> Starts the instance with the provided name hawkStopInstance <name> Stops the instance with the provided name hawkSyncInstance <name> [waitForSync:true|false] Requests an immediate sync on the instance with the provided name","title":"Managing Hawk indexer instances"},{"location":"server/cli/#managing-metamodels","text":"Name Description 
hawkListMetamodels Lists all registered metamodels in this instance hawkRegisterMetamodel <files...> Registers one or more metamodels hawkUnregisterMetamodel <uri> Unregisters the metamodel with the specified URI","title":"Managing metamodels"},{"location":"server/cli/#managing-version-control-repositories","text":"Name Description hawkAddRepository <url> <type> [user] [pwd] Adds a repository hawkListFiles <url> [filepatterns...] Lists files within a repository hawkListRepositories Lists all registered repositories in this instance hawkListRepositoryTypes Lists available repository types hawkRemoveRepository <url> Removes the repository with the specified URL hawkUpdateRepositoryCredentials <url> <user> <pwd> Changes the user/password used to monitor a repository","title":"Managing version control repositories"},{"location":"server/cli/#querying-models","text":"Name Description hawkGetModel <repo> [filepatterns...] Returns all the model elements of the specified files within the repo hawkGetRoots <repo> [filepatterns...] 
Returns only the root model elements of the specified files within the repo hawkListQueryLanguages Lists all available query languages hawkQuery <query> <language> [repo] [files] Queries the index hawkResolveProxies <ids...> Retrieves model elements by ID","title":"Querying models"},{"location":"server/cli/#managing-derived-attributes","text":"Name Description hawkAddDerivedAttribute <mmURI> <mmType> <name> <type> <lang> <expr> [many|ordered|unique]* Adds a derived attribute hawkListDerivedAttributes Lists all available derived attributes hawkRemoveDerivedAttribute <mmURI> <mmType> <name> Removes a derived attribute, if it exists","title":"Managing derived attributes"},{"location":"server/cli/#managing-indexed-attributes","text":"Name Description hawkAddIndexedAttribute <mmURI> <mmType> <name> Adds an indexed attribute hawkListIndexedAttributes Lists all available indexed attributes hawkRemoveIndexedAttribute <mmURI> <mmType> <name> Removes an indexed attribute, if it exists","title":"Managing indexed attributes"},{"location":"server/cli/#watching-over-changes-in-remote-models","text":"Name Description hawkWatchModelChanges [default|temporary|durable] [client ID] [repo] [files...] 
Watches an Artemis message queue with detected model changes","title":"Watching over changes in remote models"},{"location":"server/cli/#users","text":"The Users API has its own set of commands, which can be listed through usersHelp : Name Description usersHelp Lists all the available commands for Users usersConnect <url> [username] [password] Connects to a Thrift endpoint usersDisconnect Disconnects from the current Thrift endpoint usersAdd <username> <realname> <isAdmin: true|false> [password] Adds the user to the database usersUpdateProfile <username> <realname> <isAdmin: true|false> Changes the personal information of a user usersUpdatePassword <username> [password] Changes the password of a user usersRemove <username> Removes a user usersCheck <username> [password] Validates credentials","title":"Users"},{"location":"server/deployment/","text":"Initial setup \u00b6 To run the Hawk server, download the latest hawk-server-*.zip file for your operating system and architecture of choice from the \"Releases\" section on GitHub , and unpack it. Note that -nogpl- releases do not include GPL-licensed components: if you want them in your server, you will have to build it yourself. Make any relevant changes to the mondo-server.ini file, and then run the run-server.sh script on Linux, or simply the provided mondo-server binary on Mac or Windows. If everything goes well, you should see this message: Welcome to the Hawk Server! List available commands with 'hserverHelp'. Stop the server with 'shutdown' and then 'close'. osgi> You may now use the Thrift APIs as normal. If you need to make any tweaks, continue reading! .ini options \u00b6 You will notice that the .ini file has quite a few different options defined, in addition to the JVM options defined with -vmargs . We will analyze them in this section. -console allows us to use the OSGi console to manage Hawk instances. -consoleLog plugs Eclipse logging into the console, for following what is going on with the server. 
-Dartemis.security.enabled=false disables the Shiro security realm for the embedded Artemis server. Production environments should set this to true . -Dhawk.artemis.host=localhost has Artemis listening only on 127.0.0.1. You should change this to the IP address or hostname of the network interface that you want Artemis to listen on. Alternatively, you can have Artemis listening on all addresses (see -Dhawk.artemis.listenAll below). -Dhawk.artemis.port=61616 has Artemis listening on port 61616 in the CORE and STOMP protocols. -Dhawk.artemis.listenAll=false prevents Artemis from listening on all addresses. You can set this to true and ignore hawk.artemis.host . -Dhawk.artemis.sslEnabled=false disables HTTPS on Artemis. If you enable SSL, you will need to check the \"Enabling HTTPS\" section further below! -Dhawk.tcp.port=2080 enables the TCP server for only the Hawk API, and not the Users management one. This API is unsecured, so do this at your own risk. For production environments, you should remove this line. -Dhawk.tcp.thriftProtocol=TUPLE changes the Thrift protocol (encoding) that should be used for the TCP endpoint. -Dorg.eclipse.equinox.http.jetty.customizer.class=org.hawk.service.server.gzip.Customizer is needed to enable gzip compression on the Jetty HTTP endpoints. -Dorg.osgi.service.http.port=8080 sets the HTTP port for the APIs to 8080. -Dorg.osgi.service.http.port.secure=8443 sets the HTTPS port for the APIs to 8443. -Dosgi.noShutdown=true is needed for the server to stay running. -Dsvnkit.library.gnome-keyring.enabled=false is required to work around a bug in the integration of the GNOME keyring in recent Eclipse releases. -eclipse.keyring and -eclipse.password are the paths to the keyring and keyring password files which store the VCS credentials Hawk needs to access password-protected SVN repositories. (For Git repositories, you are assumed to keep your own clone and do any periodic pulling yourself.) -XX:+UseG1GC (part of -vmargs ) improves garbage collection in OrientDB and Neo4j. 
-XX:+UseStringDeduplication (part of -vmargs as well) noticeably reduces memory use in OrientDB. Ports \u00b6 These are the default ports that a Hawk server uses: 2080: Hawk raw TCP API, available by default (unsecured: see above for how to disable it) 8080: Hawk HTTP API, available by default (optionally secured: see below) 8443: Hawk HTTPS API, if enabled (optionally secured: see below) 61616: Artemis push notifications for Hawk index status updates (optionally secured / encrypted: see below) Concerns for production environments \u00b6 One important detail for production environments is turning on security. This is disabled by default to help with testing and initial evaluations, but it can be enabled by running the server once, shutting it down and then editing the shiro.ini file appropriately (relevant sections include comments on what to do) and switching artemis.security.enabled to true in the mondo-server.ini file. The MONDO server uses an embedded MapDB database, which is managed through the Users Thrift API. Once security is enabled, all Thrift APIs and all external (not in-VM) Artemis connections become password-protected. If you enable security, you will want to ensure that -Dhawk.tcp.port is not present in the mondo-server.ini file, since the Hawk TCP port does not support security for the sake of raw performance. If you are deploying this across a network, you will need to edit the mondo-server.ini file and customize the hawk.artemis.host line to the host that you want the Artemis server to listen to. This should be the IP address or hostname of the MONDO server in the network, normally. The Thrift API uses this hostname as well in its replies to the watchModelChanges operation in the Hawk API. Additionally, if the server IP is dynamic but has a consistent DNS name (e.g. 
an Amazon VM + a dynamic DNS provider), we recommend setting hawk.artemis.listenAll to true (so the Artemis server will keep listening on all interfaces, even if the IP address changes) and using the DNS name for hawk.artemis.host instead of a literal IP address. Finally, production environments should enable and enforce SSL as well, since plain HTTP is insecure. The Linux products include a shell script that generates simple self-signed key/trust stores and indicates which Java system properties should be set on the server and the client. Secure storage of VCS credentials \u00b6 The server hosts a copy of the Hawk model indexer, which may need to access remote Git and Subversion repositories. To access password-protected repositories, the server will need to store the proper credentials in a secure way that will not expose them to other users in the same machine. To achieve this goal, the MONDO server uses the Eclipse secure storage facilities to save the password in an encrypted form. Users need to prepare the secure storage by following these two steps: The secure store must be placed in a place no other program will try to access concurrently. This can be done by editing the mondo-server.ini server configuration file and adding this: -eclipse.keyring /path/to/keyringfile That path should be only readable by the user running the server, for added security. An encryption password must be set. For Windows and Mac, the available OS integration should be enough. For Linux environments, two lines have to be added at the beginning of the mondo-server.ini file, specifying the path to a password file with: -eclipse.password /path/to/passwordfile. 
On Linux, creating a password file from 100 bytes of random data that is only readable by the current user can be done with these commands: $ head -c 100 /dev/random | base64 > /path/to/password $ chmod 400 /path/to/password The server tests on startup that the secure store has been set properly, warning users if encryption is not available and urging them to revise their setup. Setting up SSL certificates for the server \u00b6 SSL is handled through standard Java keystore ( .jks ) files. To produce a keystore with some self-signed certificates, you could use the generate-ssl-certs.sh script included in the Linux distribution, or run these commands from other operating systems (replace CN, OU and so forth with the appropriate values): keytool -genkey -keystore mondo-server-keystore.jks -storepass secureexample -keypass secureexample -dname \"CN=localhost, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ\" -keyalg RSA keytool -export -keystore mondo-server-keystore.jks -file mondo-jks.cer -storepass secureexample keytool -import -keystore mondo-client-truststore.jks -file mondo-jks.cer -storepass secureexample -keypass secureexample -noprompt Once you have your .jks, on the client .ini you'll need to set: -Djavax.net.ssl.trustStore=path/to/client-truststore.jks -Djavax.net.ssl.trustStorePassword=secureexample On the server .ini, you'll need to enable SSL and tell Jetty and Artemis about your KeyStore: -Dorg.eclipse.equinox.http.jetty.https.enabled=true -Dhawk.artemis.sslEnabled=true -Dorg.eclipse.equinox.http.jetty.ssl.keystore=path/to/server-keystore.jks -Djavax.net.ssl.keyStore=path/to/server-keystore.jks You'll be prompted for the key store password three times: twice by Jetty and once by the Artemis server. 
If you don't want these prompts, you could use these properties, but using them is UNSAFE , as another user in the same machine could retrieve these passwords from your process manager: -Djavax.net.ssl.keyStorePassword=secureexample -Dorg.eclipse.equinox.http.jetty.ssl.keypassword=secureexample -Dorg.eclipse.equinox.http.jetty.ssl.password=secureexample","title":"Setting up SSL certificates for the server"},{"location":"server/deployment/#initial-setup","text":"To run the Hawk server, download the latest hawk-server-*.zip file for your operating system and architecture of choice from the \"Releases\" section on GitHub , and unpack it. Note that -nogpl- releases do not include GPL-licensed components: if you want them in your server, you will have to build it yourself. Make any relevant changes to the mondo-server.ini file, and then run the run-server.sh script on Linux, or simply the provided mondo-server binary on Mac or Windows. If everything goes well, you should see this message: Welcome to the Hawk Server! List available commands with 'hserverHelp'. Stop the server with 'shutdown' and then 'close'. osgi> You may now use the Thrift APIs as normal. If you need to make any tweaks, continue reading!","title":"Initial setup"},{"location":"server/deployment/#ini-options","text":"You will notice that the .ini file has quite a few different options defined, in addition to the JVM options defined with -vmargs . We will analyze them in this section. -console allows us to use the OSGi console to manage Hawk instances. -consoleLog plugs Eclipse logging into the console, for following what is going on with the server. -Dartemis.security.enabled=false disables the Shiro security realm for the embedded Artemis server. Production environments should set this to true . -Dhawk.artemis.host=localhost has Artemis listening only on 127.0.0.1. You should change this to the IP address or hostname of the network interface that you want Artemis to listen on. 
Alternatively, you can have Artemis listening on all addresses (see -Dhawk.artemis.listenAll below). -Dhawk.artemis.port=61616 has Artemis listening on port 61616 in the CORE and STOMP protocols. -Dhawk.artemis.listenAll=false prevents Artemis from listening on all addresses. You can set this to true and ignore hawk.artemis.host . -Dhawk.artemis.sslEnabled=false disables HTTPS on Artemis. If you enable SSL, you will need to check the \"Enabling HTTPS\" section further below! -Dhawk.tcp.port=2080 enables the TCP server for only the Hawk API, and not the Users management one. This API is unsecured, so do this at your own risk. For production environments, you should remove this line. -Dhawk.tcp.thriftProtocol=TUPLE changes the Thrift protocol (encoding) that should be used for the TCP endpoint. -Dorg.eclipse.equinox.http.jetty.customizer.class=org.hawk.service.server.gzip.Customizer is needed to enable gzip compression on the Jetty HTTP endpoints. -Dorg.osgi.service.http.port=8080 sets the HTTP port for the APIs to 8080. -Dorg.osgi.service.http.port.secure=8443 sets the HTTPS port for the APIs to 8443. -Dosgi.noShutdown=true is needed for the server to stay running. -Dsvnkit.library.gnome-keyring.enabled=false is required to work around a bug in the integration of the GNOME keyring in recent Eclipse releases. -eclipse.keyring and -eclipse.password are the paths to the keyring and keyring password files which store the VCS credentials Hawk needs to access password-protected SVN repositories. (For Git repositories, you are assumed to keep your own clone and do any periodic pulling yourself.) -XX:+UseG1GC (part of -vmargs ) improves garbage collection in OrientDB and Neo4j. 
-XX:+UseStringDeduplication (part of -vmargs as well) noticeably reduces memory use in OrientDB.","title":".ini options"},{"location":"server/deployment/#ports","text":"These are the default ports that a Hawk server uses: 2080: Hawk raw TCP API, available by default (unsecured: see above for how to disable it) 8080: Hawk HTTP API, available by default (optionally secured: see below) 8443: Hawk HTTPS API, if enabled (optionally secured: see below) 61616: Artemis push notifications for Hawk index status updates (optionally secured / encrypted: see below)","title":"Ports"},{"location":"server/deployment/#concerns-for-production-environments","text":"One important detail for production environments is turning on security. This is disabled by default to help with testing and initial evaluations, but it can be enabled by running the server once, shutting it down and then editing the shiro.ini file appropriately (relevant sections include comments on what to do) and switching artemis.security.enabled to true in the mondo-server.ini file. The MONDO server uses an embedded MapDB database, which is managed through the Users Thrift API. Once security is enabled, all Thrift APIs and all external (not in-VM) Artemis connections become password-protected. If you enable security, you will want to ensure that -Dhawk.tcp.port is not present in the mondo-server.ini file, since the Hawk TCP port does not support security for the sake of raw performance. If you are deploying this across a network, you will need to edit the mondo-server.ini file and customize the hawk.artemis.host line to the host that you want the Artemis server to listen to. This should be the IP address or hostname of the MONDO server in the network, normally. The Thrift API uses this hostname as well in its replies to the watchModelChanges operation in the Hawk API. Additionally, if the server IP is dynamic but has a consistent DNS name (e.g. 
an Amazon VM + a dynamic DNS provider), we recommend setting hawk.artemis.listenAll to true (so the Artemis server will keep listening on all interfaces, even if the IP address changes) and using the DNS name for hawk.artemis.host instead of a literal IP address. Finally, production environments should enable and enforce SSL as well, since plain HTTP is insecure. The Linux products include a shell script that generates simple self-signed key/trust stores and indicates which Java system properties should be set on the server and the client.","title":"Concerns for production environments"},{"location":"server/deployment/#secure-storage-of-vcs-credentials","text":"The server hosts a copy of the Hawk model indexer, which may need to access remote Git and Subversion repositories. To access password-protected repositories, the server will need to store the proper credentials in a secure way that will not expose them to other users in the same machine. To achieve this goal, the MONDO server uses the Eclipse secure storage facilities to save the password in an encrypted form. Users need to prepare the secure storage by following these two steps: The secure store must be placed in a place no other program will try to access concurrently. This can be done by editing the mondo-server.ini server configuration file and adding this: -eclipse.keyring /path/to/keyringfile That path should be only readable by the user running the server, for added security. An encryption password must be set. For Windows and Mac, the available OS integration should be enough. For Linux environments, two lines have to be added at the beginning of the mondo-server.ini file, specifying the path to a password file with: -eclipse.password /path/to/passwordfile. 
On Linux, creating a password file from 100 bytes of random data that is only readable by the current user can be done with these commands: $ head -c 100 /dev/random | base64 > /path/to/password $ chmod 400 /path/to/password The server tests on startup that the secure store has been set properly, warning users if encryption is not available and urging them to revise their setup.","title":"Secure storage of VCS credentials"},{"location":"server/deployment/#setting-up-ssl-certificates-for-the-server","text":"SSL is handled through standard Java keystore ( .jks ) files. To produce a keystore with some self-signed certificates, you could use the generate-ssl-certs.sh script included in the Linux distribution, or run these commands from other operating systems (replace CN, OU and so forth with the appropriate values): keytool -genkey -keystore mondo-server-keystore.jks -storepass secureexample -keypass secureexample -dname \"CN=localhost, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ\" -keyalg RSA keytool -export -keystore mondo-server-keystore.jks -file mondo-jks.cer -storepass secureexample keytool -import -keystore mondo-client-truststore.jks -file mondo-jks.cer -storepass secureexample -keypass secureexample -noprompt Once you have your .jks, on the client .ini you'll need to set: -Djavax.net.ssl.trustStore=path/to/client-truststore.jks -Djavax.net.ssl.trustStorePassword=secureexample On the server .ini, you'll need to enable SSL and tell Jetty and Artemis about your KeyStore: -Dorg.eclipse.equinox.http.jetty.https.enabled=true -Dhawk.artemis.sslEnabled=true -Dorg.eclipse.equinox.http.jetty.ssl.keystore=path/to/server-keystore.jks -Djavax.net.ssl.keyStore=path/to/server-keystore.jks You'll be prompted for the key store password three times: twice by Jetty and once by the Artemis server. 
If you don't want these prompts, you could use these properties, but using them is UNSAFE , as another user in the same machine could retrieve these passwords from your process manager: -Djavax.net.ssl.keyStorePassword=secureexample -Dorg.eclipse.equinox.http.jetty.ssl.keypassword=secureexample -Dorg.eclipse.equinox.http.jetty.ssl.password=secureexample","title":"Setting up SSL certificates for the server"},{"location":"server/eclipse/","text":"Hawk includes multiple optional features to integrate the Thrift APIs with regular Eclipse-based tooling: A custom Hawk instance type that operates over the Thrift API instead of locally. An EMF abstraction that allows for treating remote models as local ones. An editor for the .hawkmodel model access descriptors used by the above EMF resource abstraction. This page documents how these different features can be used. Managing remote Hawk indexers \u00b6 When creating a Hawk instance for the first time (using the dialog shown below), users can specify which factory will be used. The name of the selected factory will be saved into the configuration of the instance, allowing Hawk to recreate the instance in later executions without asking again. Hawk provides a default LocalHawkFactory whose LocalHawk instances operate in the current Java virtual machine. Users can also specify which Hawk components should be enabled. A factory can also be used to \"import\" instances that already exist but Hawk does not know about. For the local case, these would be instances that were previously removed from Eclipse but whose folders were not deleted. The Eclipse import dialog looks like this: The \"Thrift API integration for Hawk GUI\" feature provides a plugin that contributes a new indexer factory, ThriftRemoteHawkFactory, which produces ThriftRemoteHawk instances that use ThriftRemoteModelIndexer indexers. When creating a new instance, the factory will use the createInstance operation to add the instance to the server. 
When used to \"import\", the remote factory retrieves the list of Hawk instances available on the server through the listInstances operation of the Thrift API. Management actions (such as starting or stopping the instance) and their results are likewise translated between the user interface and the Thrift API. The Hawk user interface provides live updates on the current state of each indexer, with short status messages and an indication of whether the indexer is stopped, running or updating. Management actions and queries are disabled during an update, to prevent data consistency issues. The Hawk indexer in the remote server talks to the client through an Artemis queue: please make sure Artemis has been set up correctly in the server (see the setup guide ). All these aspects are transparent to the user: the only difference is selecting the appropriate \"Instance type\" in the new instance or import dialogs and entering the URL to the Hawk Thrift endpoint. If the remote instance type is chosen, Hawk will only list the Hawk components that are installed in the server, which may differ from those installed in the client. Editor for remote model access descriptors \u00b6 There are many different use cases for retrieving models over the network, each with their own requirements. The EMF model abstraction uses a .hawkmodel model access descriptor to specify the exact configuration we want to use when fetching the model over the network. .hawkmodel files can be opened by any EMF-compatible tool and operate just like a regular model. To simplify the creation and maintenance of these .hawkmodel files, an Eclipse-based editor is provided in the \"Remote Hawk EMF Model UI Feature\". The editor is divided into three tabs: a form-based tab for editing most aspects of the descriptor in a controlled manner, another form-based tab for editing the effective metamodel to limit the contents of the model, and a text-based tab for editing the descriptor directly. 
Main tab \u00b6 Here is a screenshot of the main tab: The main form-based tab is divided into three sections: The \"Instance\" section provides connection details for the remote Hawk instance: the URL of the Thrift endpoint, the Thrift protocol to use (more details in D5.6) and the name of the Hawk instance within the server. \"Instance name\" can be clicked to open a selection dialog with all the available instances. The \"Username\" and \"Password\" fields only need to be filled in if using the .hawkmodel file outside Eclipse. When using the .hawkmodel inside Eclipse, the remote EMF abstraction will fall back on the credentials stored in the Eclipse secure store if needed. The \"Contents\" section allows for filtering the contents of the Hawk index to be read and changing how they should be loaded: By default, the entire index is retrieved (repository URL is '*', file pattern is '*' and no query is used). The \"Repository URL\", \"File pattern(s)\" and \"Query language\" labels can be clicked to open selection dialogs with the appropriate options. The default loading mode is \"GREEDY\" (send the entire contents of the model in one message), but various lazy loading modes are available. The contents of the index can be split over the different source files or not. While splitting by file is useful for browsing, some EMF-based tools may not be compatible with it. The \"Default namespaces\" field makes it possible to resolve ambiguous type names. For instance, both the IFC2x3 and the IFC4 metamodels have a type called IfcActor . Without this field, the query would need to specify which one of the two metamodels should be used on every reference to IfcActor , which is unwieldy and prone to mistakes. With this field filled, the query will be told to resolve ambiguous type references to those of the IFC2x3 metamodel. 
The \"Page size for initial load\" field can be set to a value other than 0, indicating that during the initial load of the model, its contents should not be sent in one response message, but rather divided into \"pages\" of a certain size. It was observed that a GREEDY loading mode with an adequate page size can be faster to load than a lazy loading mode, while still keeping server memory and bandwidth requirements under control. The \"Subscription\" section allows users to enable live updates in the opened model through the watchGraphChanges operation and an Apache Artemis queue of a certain durability. In order to allow the server to recognize users that reconnect after a connection loss, a unique client ID should be provided. Effective metamodel tab \u00b6 The effective metamodel editor tab presents a table that lists all the metamodels registered in the selected remote Hawk instance, their types, and their features (called \"slots\" by the Hawk API). It is structured as a tree with three levels, with the metamodels at the root level, the types inside the metamodels, and their slots inside the types. The implicit default is that all metamodels are completely included, but users can manually include or exclude certain metamodels, types or slots within the types. This can be done through drop-down selection lists on the \"State\" column of the table, or through the buttons on the right of the table: \"Include all\" resets the entire table to the default state of implicitly including everything. \"Exclude all\" resets the entire table to excluding all metamodels. \"Exclude\" and \"Include\" only change the state of the currently selected element. \"Reset\" returns the currently selected element to the \"Default\" state. The effective metamodel is saved as part of the .hawkmodel file, and uses both inclusion and exclusion rules to remain as compact as possible (as it will need to be sent over the network). 
The rules work as follows: A metamodel is included if it is \"Included\", or if it has the \"Default\" state and no metamodels are explicitly \"Included\". A type is included if it is not \"Excluded\" and its metamodel is included. A slot is included if it is not \"Excluded\" and its type is included.","title":"Eclipse client"},{"location":"server/eclipse/#managing-remote-hawk-indexers","text":"When creating a Hawk instance for the first time (using the dialog shown below), users can specify which factory will be used. The name of the selected factory will be saved into the configuration of the instance, allowing Hawk to recreate the instance in later executions without asking again. Hawk provides a default LocalHawkFactory whose LocalHawk instances operate in the current Java virtual machine. Users can also specify which Hawk components should be enabled. A factory can also be used to \"import\" instances that already exist but Hawk does not know about. For the local case, these would be instances that were previously removed from Eclipse but whose folders were not deleted. The Eclipse import dialog looks like this: The \"Thrift API integration for Hawk GUI\" feature provides a plugin that contributes a new indexer factory, ThriftRemoteHawkFactory, which produces ThriftRemoteHawk instances that use ThriftRemoteModelIndexer indexers. When creating a new instance, the factory will use the createInstance operation to add the instance to the server. When used to \"import\", the remote factory retrieves the list of Hawk instances available on the server through the listInstances operation of the Thrift API. Management actions (such as starting or stopping the instance) and their results are likewise translated between the user interface and the Thrift API. The Hawk user interface provides live updates on the current state of each indexer, with short status messages and an indication of whether the indexer is stopped, running or updating. 
Management actions and queries are disabled during an update, to prevent data consistency issues. The Hawk indexer in the remote server talks to the client through an Artemis queue: please make sure Artemis has been set up correctly in the server (see the setup guide ). All these aspects are transparent to the user: the only difference is selecting the appropriate \"Instance type\" in the new instance or import dialogs and entering the URL to the Hawk Thrift endpoint. If the remote instance type is chosen, Hawk will only list the Hawk components that are installed in the server, which may differ from those installed in the client.","title":"Managing remote Hawk indexers"},{"location":"server/eclipse/#editor-for-remote-model-access-descriptors","text":"There are many different use cases for retrieving models over the network, each with their own requirements. The EMF model abstraction uses a .hawkmodel model access descriptor to specify the exact configuration we want to use when fetching the model over the network. .hawkmodel files can be opened by any EMF-compatible tool and operate just like a regular model. To simplify the creation and maintenance of these .hawkmodel files, an Eclipse-based editor is provided in the \"Remote Hawk EMF Model UI Feature\". The editor is divided into three tabs: a form-based tab for editing most aspects of the descriptor in a controlled manner, another form-based tab for editing the effective metamodel to limit the contents of the model, and a text-based tab for editing the descriptor directly.","title":"Editor for remote model access descriptors"},{"location":"server/eclipse/#main-tab","text":"Here is a screenshot of the main tab: The main form-based tab is divided into three sections: The \"Instance\" section provides connection details for the remote Hawk instance: the URL of the Thrift endpoint, the Thrift protocol to use (more details in D5.6) and the name of the Hawk instance within the server. 
\"Instance name\" can be clicked to open a selection dialog with all the available instances. The \"Username\" and \"Password\" fields only need to be filled in if using the .hawkmodel file outside Eclipse. When using the .hawkmodel inside Eclipse, the remote EMF abstraction will fall back on the credentials stored in the Eclipse secure store if needed. The \"Contents\" section allows for filtering the contents of the Hawk index to be read and changing how they should be loaded: By default, the entire index is retrieved (repository URL is '*', file pattern is '*' and no query is used). The \"Repository URL\", \"File pattern(s)\" and \"Query language\" labels can be clicked to open selection dialogs with the appropriate options. The default loading mode is \"GREEDY\" (send the entire contents of the model in one message), but various lazy loading modes are available. The contents of the index can be split over the different source files or not. While splitting by file is useful for browsing, some EMF-based tools may not be compatible with it. The \"Default namespaces\" field makes it possible to resolve ambiguous type names. For instance, both the IFC2x3 and the IFC4 metamodels have a type called IfcActor . Without this field, the query would need to specify which one of the two metamodels should be used on every reference to IfcActor , which is unwieldy and prone to mistakes. With this field filled, the query will be told to resolve ambiguous type references to those of the IFC2x3 metamodel. The \"Page size for initial load\" field can be set to a value other than 0, indicating that during the initial load of the model, its contents should not be sent in one response message, but rather divided into \"pages\" of a certain size. It was observed that a GREEDY loading mode with an adequate page size can be faster to load than a lazy loading mode, while still keeping server memory and bandwidth requirements under control. 
The \"Subscription\" section allows users to enable live updates in the opened model through the watchGraphChanges operation and an Apache Artemis queue of a certain durability. In order to allow the server to recognize users that reconnect after a connection loss, a unique client ID should be provided.","title":"Main tab"},{"location":"server/eclipse/#effective-metamodel-tab","text":"The effective metamodel editor tab presents a table that lists all the metamodels registered in the selected remote Hawk instance, their types, and their features (called \"slots\" by the Hawk API). It is structured as a tree with three levels, with the metamodels at the root level, the types inside the metamodels, and their slots inside the types. The implicit default is that all metamodels are completely included, but users can manually include or exclude certain metamodels, types or slots within the types. This can be done through drop-down selection lists on the \"State\" column of the table, or through the buttons on the right of the table: \"Include all\" resets the entire table to the default state of implicitly including everything. \"Exclude all\" resets the entire table to excluding all metamodels. \"Exclude\" and \"Include\" only change the state of the currently selected element. \"Reset\" returns the currently selected element to the \"Default\" state. The effective metamodel is saved as part of the .hawkmodel file, and uses both inclusion and exclusion rules to remain as compact as possible (as it will need to be sent over the network). The rules work as follows: A metamodel is included if it is \"Included\", or if it has the \"Default\" state and no metamodels are explicitly \"Included\". A type is included if it is not \"Excluded\" and its metamodel is included. 
A slot is included if it is not \"Excluded\" and its type is included.","title":"Effective metamodel tab"},{"location":"server/file-config/","text":"The Hawk server includes an API to add Hawk instances that are used to index and query models. The configuration engine allows the server to create and configure Hawk instances as per user-created configuration files. The server should be ready to receive user queries upon startup, without any interaction from users or clients. Upon startup, the Hawk server reads and parses the configuration files, and then creates/updates Hawk instances accordingly. NOTE: the Hawk server no longer writes to configuration files. If an instance configuration changes during operation, this configuration is persisted through the current HawkConfig mechanism. Configuration files will not overwrite any of the changed settings. The only exception is the polling min/max, which will revert to the config file settings if the server is restarted. Format \u00b6 Configuration files are XML files that define a Hawk instance's name and its configuration. An XML schema can be found at HawkServerConfigurationSchema.xsd . 
A sample configuration file can be found at Sample Configuration File . The XML should include the following elements: Table 1: List of XML elements in configuration file \u00b6 Element Name Parent Element Name multiplicity Value Description \u2018hawk\u2019 xml 1 None Root element \u2018delay\u2019 \u2018hawk\u2019 1 None Polling configuration \u2018plugins\u2019 \u2018hawk\u2019 0-1 None List of plugins (to be/that are) enabled \u2018plugin\u2019 \u2018plugins\u2019 0-* None Plugin name \u2018metamodels\u2019 \u2018hawk\u2019 0-1 None List of metamodels (to be/that are) registered \u2018metamodel\u2019 \u2018metamodels\u2019 0-* None Metamodel parameters \u2018repositories\u2019 \u2018hawk\u2019 0-1 None List of repositories (to be/that are) added \u2018repository\u2019 \u2018repositories\u2019 0-* None Repository parameters \u2018derivedAttributes\u2019 \u2018hawk\u2019 0-1 None List of derived attributes (to be/that are) added \u2018derivedAttribute\u2019 \u2018derivedAttributes\u2019 0-* None Derived attribute parameters \u2018derivation\u2019 \u2018derivedAttribute\u2019 0-1 None Derivation parameters \u2018logic\u2019 \u2018derivation\u2019 0-1 CDATA section An executable expression of the derivation logic in the language specified. 
\u2018indexedAttributes\u2019 \u2018hawk\u2019 0-1 None List of indexed attributes (to be/that are) added \u2018indexedAttribute\u2019 \u2018indexedAttributes\u2019 0-* None Indexed attribute parameters Table 2: \u2018hawk\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018hawk\u2019 \u2018name\u2019 Required String The unique name of the new Hawk instance \u2018backend\u2019 Required String The name of the backend to be used (e.g. org.hawk.orientdb.OrientDatabase, org.hawk.orientdb.RemoteOrientDatabase) Table 3: \u2018delay\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018delay\u2019 \u2018min\u2019 Required String Minimum delay between periodic synchronization in milliseconds \u2018max\u2019 Required String Maximum delay between periodic synchronization in milliseconds (0 to disable periodic synchronization) Table 4: \u2018plugin\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018plugin\u2019 \u2018name\u2019 Required String e.g. (org.hawk.modelio.exml.listeners.ModelioGraphChangeListener, org.hawk.modelio.exml.metamodel.ModelioMetaModelResourceFactory, org.hawk.modelio.exml.model.ModelioModelResourceFactory) Table 5: \u2018metamodel\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018metamodel\u2019 \u2018location\u2019 Optional String Location of metamodel file to be registered ~~\u2018uri\u2019~~ ~~Optional~~ ~~String~~ ~~Metamodel URI. 
This value is set automatically by the server to list registered metamodels~~ Table 6: \u2018repository\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018repository\u2019 \u2018location\u2019 Required String Location of the repository \u2018type\u2019 Optional String The type of repository (see available repository types) \u2018user\u2019 Optional String Username for logging into the VCS \u2018pass\u2019 Optional String Password for logging into the VCS \u2018frozen\u2019 Optional String If the repository is frozen (true/false) Table 7: \u2018derivedAttribute\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018derivedAttribute\u2019 \u2018attributeName\u2019 Required String The name of the derived attribute \u2018typeName\u2019 Required String The name of the type to which the derived attribute belongs \u2018metamodelUri\u2019 Required String The URI of the metamodel to which the derived attribute belongs \u2018attributeType\u2019 Optional String The (primitive) type of the derived attribute \u2018isOrdered\u2019 Optional String A flag specifying whether the order of the values of the derived attribute is significant (only makes sense when isMany=true) \u2018isUnique\u2019 Optional String A flag specifying whether the values of the derived attribute are unique (only makes sense when isMany=true) \u2018isMany\u2019 Optional String The multiplicity of the derived attribute Table 8: \u2018derivation\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018derivation\u2019 \u2018language\u2019 Required String The language used to express the derivation logic. 
Available languages in Hawk: org.hawk.epsilon.emc.EOLQueryEngine, org.hawk.orientdb.query.OrientSQLQueryEngine, org.hawk.epsilon.emc.EPLQueryEngine Table 9: \u2018indexedAttribute\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018indexedAttribute\u2019 \u2018attributeName\u2019 Required String The name of the indexed attribute. \u2018typeName\u2019 Required String The name of the type to which the indexed attribute belongs. \u2018metamodelUri\u2019 Required String The URI of the metamodel to which the indexed attribute belongs. Location \u00b6 Configuration files are expected to be located in the \u2018configuration\u2019 folder in the server\u2019s home directory. Each Hawk instance should have its own configuration file. There are no rules on how the file should be named. It is a good practice to include the Hawk instance name in the file name for easy recognition. How to use/enable Hawk instance configuration engine \u00b6 You can follow this video tutorial , or alternatively follow these steps: Download the hawk-server-*.zip file for your operating system and architecture of choice from Hawk Server With Configuration . Create a configuration file for each instance required to run in the Hawk server. 
Edit configuration files: Set instance name, backend, delay Add list of plugins to be enabled Add the location of each metamodel file to be registered Add repositories that are to be indexed Add any required derived attributes Add any required indexed attributes Save the configuration files to the \u2018configuration\u2019 folder in the server\u2019s home directory (see figure 1) Perform any other configuration that is required by the Hawk Server and start the server (by following instructions at Deploying-and-running-the-server ) Check if the Hawk instances are added and running by typing \u2018hawkListInstances\u2019 in the server\u2019s command terminal: Usage Notes \u00b6 Deleting configuration files from the directory will not delete instances from the server. However, the server will not start those instances. To test the Hawk server with the Measure Platform, refer to Using HawkQueryMeasure to query a Hawk instance running in the Hawk Server","title":"File-based configuration"},{"location":"server/file-config/#format","text":"Configuration files are XML files that define a Hawk instance's name and its configuration. An XML schema can be found at HawkServerConfigurationSchema.xsd . 
A sample configuration file can be found at Sample Configuration File . The XML should include the following elements:","title":"Format"},{"location":"server/file-config/#table-1-list-of-xml-elements-in-configuration-file","text":"Element Name Parent Element Name multiplicity Value Description \u2018hawk\u2019 xml 1 None Root element \u2018delay\u2019 \u2018hawk\u2019 1 None Polling configuration \u2018plugins\u2019 \u2018hawk\u2019 0-1 None List of plugins (to be/that are) enabled \u2018plugin\u2019 \u2018plugins\u2019 0-* None Plugin name \u2018metamodels\u2019 \u2018hawk\u2019 0-1 None List of metamodels (to be/that are) registered \u2018metamodel\u2019 \u2018metamodels\u2019 0-* None Metamodel parameters \u2018repositories\u2019 \u2018hawk\u2019 0-1 None List of repositories (to be/that are) added \u2018repository\u2019 \u2018repositories\u2019 0-* None Repository parameters \u2018derivedAttributes\u2019 \u2018hawk\u2019 0-1 None List of derived attributes (to be/that are) added \u2018derivedAttribute\u2019 \u2018derivedAttributes\u2019 0-* None Derived attribute parameters \u2018derivation\u2019 \u2018derivedAttribute\u2019 0-1 None Derivation parameters \u2018logic\u2019 \u2018derivation\u2019 0-1 CDATA section An executable expression of the derivation logic in the language specified. 
\u2018indexedAttributes\u2019 \u2018hawk\u2019 0-1 None List of indexed attributes (to be/that are) added \u2018indexedAttribute\u2019 \u2018indexedAttributes\u2019 0-* None Indexed attribute parameters","title":"Table 1: List of XML elements in configuration file"},{"location":"server/file-config/#table-2-hawk-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018hawk\u2019 \u2018name\u2019 Required String The unique name of the new Hawk instance \u2018backend\u2019 Required String The name of the backend to be used (e.g. org.hawk.orientdb.OrientDatabase, org.hawk.orientdb.RemoteOrientDatabase)","title":"Table 2:     \u2018hawk\u2019 attributes"},{"location":"server/file-config/#table-3-delay-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018delay\u2019 \u2018min\u2019 Required String Minimum delay between periodic synchronization in milliseconds \u2018max\u2019 Required String Maximum delay between periodic synchronization in milliseconds (0 to disable periodic synchronization)","title":"Table 3:     \u2018delay\u2019 attributes"},{"location":"server/file-config/#table-4-plugin-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018plugin\u2019 \u2018name\u2019 Required String e.g. (org.hawk.modelio.exml.listeners.ModelioGraphChangeListener, org.hawk.modelio.exml.metamodel.ModelioMetaModelResourceFactory, org.hawk.modelio.exml.model.ModelioModelResourceFactory)","title":"Table 4:     \u2018plugin\u2019 attributes"},{"location":"server/file-config/#table-5-metamodel-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018metamodel\u2019 \u2018location\u2019 Optional String Location of metamodel file to be registered ~~\u2018uri\u2019~~ ~~Optional~~ ~~String~~ ~~Metamodel URI. 
This value is set automatically by the server to list registered metamodels~~","title":"Table 5:     \u2018metamodel\u2019 attributes"},{"location":"server/file-config/#table-6-repository-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018repository\u2019 \u2018location\u2019 Required String Location of the repository \u2018type\u2019 Optional String The type of repository (see available repository types) \u2018user\u2019 Optional String Username for logging into the VCS \u2018pass\u2019 Optional String Password for logging into the VCS \u2018frozen\u2019 Optional String If the repository is frozen (true/false)","title":"Table 6:     \u2018repository\u2019 attributes"},{"location":"server/file-config/#table-7-derivedattribute-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018derivedAttribute\u2019 \u2018attributeName\u2019 Required String The name of the derived attribute \u2018typeName\u2019 Required String The name of the type to which the derived attribute belongs \u2018metamodelUri\u2019 Required String The URI of the metamodel to which the derived attribute belongs \u2018attributeType\u2019 Optional String The (primitive) type of the derived attribute \u2018isOrdered\u2019 Optional String A flag specifying whether the order of the values of the derived attribute is significant (only makes sense when isMany=true) \u2018isUnique\u2019 Optional String A flag specifying whether the values of the derived attribute are unique (only makes sense when isMany=true) \u2018isMany\u2019 Optional String The multiplicity of the derived attribute","title":"Table 7:    \u2018derivedAttribute\u2019 attributes"},{"location":"server/file-config/#table-8-derivation-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018derivation\u2019 \u2018language\u2019 Required String The language used to express the derivation logic. 
Available languages in Hawk: org.hawk.epsilon.emc.EOLQueryEngine, org.hawk.orientdb.query.OrientSQLQueryEngine, org.hawk.epsilon.emc.EPLQueryEngine","title":"Table 8:     \u2018derivation\u2019 attributes"},{"location":"server/file-config/#table-9-indexedattribute-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018indexedAttribute\u2019 \u2018attributeName\u2019 Required String The name of the indexed attribute. \u2018typeName\u2019 Required String The name of the type to which the indexed attribute belongs. \u2018metamodelUri\u2019 Required String The URI of the metamodel to which the indexed attribute belongs.","title":"Table 9:     \u2018indexedAttribute\u2019 attributes"},{"location":"server/file-config/#location","text":"Configuration files are expected to be located in the \u2018configuration\u2019 folder in the server\u2019s home directory. Each Hawk instance should have its own configuration file. There are no rules on how the file should be named. It is a good practice to include the Hawk instance name in the file name for easy recognition.","title":"Location"},{"location":"server/file-config/#how-to-useenable-hawk-instance-configuration-engine","text":"You can follow this video tutorial , or alternatively follow these steps: Download the hawk-server-*.zip file for your operating system and architecture of choice from Hawk Server With Configuration . Create a configuration file for each instance required to run in the Hawk server. 
Edit configuration files: Set instance name, backend, delay Add list of plugins to be enabled Add the location of each metamodel file to be registered Add repositories that are to be indexed Add any required derived attributes Add any required indexed attributes Save the configuration files to the \u2018configuration\u2019 folder in the server\u2019s home directory (see figure 1) Perform any other configuration that is required by the Hawk Server and start the server (by following instructions at Deploying-and-running-the-server ) Check if the Hawk instances are added and running by typing \u2018hawkListInstances\u2019 in the server\u2019s command terminal:","title":"How to use/enable Hawk instance configuration engine"},{"location":"server/file-config/#usage-notes","text":"Deleting configuration files from the directory will not delete instances from the server. However, the server will not start those instances. To test the Hawk server with the Measure Platform, refer to Using HawkQueryMeasure to query a Hawk instance running in the Hawk Server","title":"Usage Notes"},{"location":"server/logging/","text":"Logging in Hawk is done through the Logback library. The specific logback.xml file is part of the org.hawk.service.server.logback plugin fragment. If you need to edit it, it is located in the plugins/org.hawk.service.server.logback_<HAWK RELEASE> folder from the main directory of the server. 
A typical configuration with Hawk logging at the DEBUG level, with time-based rolling and all messages going to the hawk.log file would look as follows: <configuration> <appender name= \"STDOUT\" class= \"ch.qos.logback.core.ConsoleAppender\" > <layout class= \"ch.qos.logback.classic.PatternLayout\" > <Pattern> %d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n </Pattern> </layout> </appender> <appender name= \"FILE\" class= \"ch.qos.logback.core.rolling.RollingFileAppender\" > <file> hawk.log </file> <encoder class= \"ch.qos.logback.classic.encoder.PatternLayoutEncoder\" > <Pattern> %d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n </Pattern> </encoder> <rollingPolicy class= \"ch.qos.logback.core.rolling.TimeBasedRollingPolicy\" > <!-- rollover daily --> <fileNamePattern> mylog-%d{yyyy-MM-dd}.%i.txt </fileNamePattern> <maxHistory> 60 </maxHistory> </rollingPolicy> </appender> <logger name= \"org.eclipse.jetty\" level= \"warn\" additivity= \"false\" > <appender-ref ref= \"STDOUT\" /> </logger> <logger name= \"ch.qos.logback\" level= \"error\" additivity= \"false\" > <appender-ref ref= \"STDOUT\" /> </logger> <logger name= \"org.apache.shiro\" level= \"error\" additivity= \"false\" > <appender-ref ref= \"STDOUT\" /> </logger> <!-- Change to \"error\" if Hawk produces too many messages for you --> <logger name= \"org.hawk\" level= \"debug\" additivity= \"false\" > <appender-ref ref= \"STDOUT\" /> <appender-ref ref= \"FILE\" /> </logger> <root level= \"debug\" > <appender-ref ref= \"STDOUT\" /> </root> </configuration>","title":"Logging"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Eclipse Hawk \u00b6 Eclipse Hawk is a model indexing solution that can take models written with various technologies and turn them into graph databases, for easier and faster querying. Hawk is licensed under the Eclipse Public License 2.0 , with the GNU GPL 3.0 as secondary license. Any questions? Check the other sections on the left for how to get started and use Hawk. If you cannot find an answer there, feel free to ask at the official forum in Eclipse.org . Eclipse update sites \u00b6 The core components of Hawk, the OrientDB / Greycat backends, and the Thrift API clients can be installed from one of these Eclipse update sites: Site Location Stable https://download.eclipse.org/hawk/2.1.0/updates/ Interim https://download.eclipse.org/hawk/2.2.0/updates/ If you are developing a custom Hawk server, you will find the Hawk server components in these update sites: Site Location Stable https://download.eclipse.org/hawk/2.1.0/server/ Interim https://download.eclipse.org/hawk/2.2.0/server/ Plain libraries \u00b6 Many of the Eclipse Hawk components are available via Maven Central under the org.eclipse.hawk group ID: Site Repository Group ID Version Stable Maven Central org.eclipse.hawk 2.1.0 Interim OSSRH org.eclipse.hawk 2.2.0-SNAPSHOT Thrift API libraries \u00b6 There are Apache Thrift client libraries targeting C++, Java, JavaScript, and Python for talking with a Hawk server over its Thrift API. The Java libraries are available as Maven artefacts (see above). The C++ and JavaScript libraries can be downloaded from the links below. 
C++ libraries \u00b6 Site Location Stable http://download.eclipse.org/hawk/2.1.0/hawk-thrift-cpp-2.1.0.tar.gz Interim http://download.eclipse.org/hawk/2.2.0/hawk-thrift-cpp-2.2.0.tar.gz JavaScript libraries \u00b6 Site Location Stable http://download.eclipse.org/hawk/2.1.0/hawk-thrift-js-2.1.0.tar.gz Interim http://download.eclipse.org/hawk/2.2.0/hawk-thrift-js-2.2.0.tar.gz Python libraries \u00b6 Site Location Stable http://download.eclipse.org/hawk/2.1.0/hawk-thrift-py-2.1.0.tar.gz Interim http://download.eclipse.org/hawk/2.2.0/hawk-thrift-py-2.2.0.tar.gz Firewall-friendly artefacts \u00b6 For environments with corporate firewalls, the zipped update sites, zipped source code, and prebuilt CLI/server products for Linux, MacOS and Windows are available from the download folders: Folder Location Stable https://download.eclipse.org/hawk/2.1.0/ Interim https://download.eclipse.org/hawk/2.2.0/ Docker images \u00b6 Docker images for the Hawk Server are available from the hawk-docker project at Hawk Labs. These Docker images are rebuilt at least once a week, or whenever there are new changes in Hawk. Source code \u00b6 To access the source code, clone the Git repository for Hawk with your preferred client from: https://gitlab.eclipse.org/eclipse/hawk/hawk.git Committers will use a different URL: git@gitlab.eclipse.org:eclipse/hawk/hawk.git You can also read the code through your browser from the Eclipse Gitlab instance (which allows for archive downloads). Older versions \u00b6 Downloads for older versions are archived at Eclipse.org: 2.0.0","title":"Home"},{"location":"#eclipse-hawk","text":"Eclipse Hawk is a model indexing solution that can take models written with various technologies and turn them into graph databases, for easier and faster querying. Hawk is licensed under the Eclipse Public License 2.0 , with the GNU GPL 3.0 as secondary license. Any questions? Check the other sections on the left for how to get started and use Hawk. 
If you cannot find an answer there, feel free to ask at the official forum in Eclipse.org .","title":"Eclipse Hawk"},{"location":"#eclipse-update-sites","text":"The core components of Hawk, the OrientDB / Greycat backends, and the Thrift API clients can be installed from one of these Eclipse update sites: Site Location Stable https://download.eclipse.org/hawk/2.1.0/updates/ Interim https://download.eclipse.org/hawk/2.2.0/updates/ If you are developing a custom Hawk server, you will find the Hawk server components in these update sites: Site Location Stable https://download.eclipse.org/hawk/2.1.0/server/ Interim https://download.eclipse.org/hawk/2.2.0/server/","title":"Eclipse update sites"},{"location":"#plain-libraries","text":"Many of the Eclipse Hawk components are available via Maven Central under the org.eclipse.hawk group ID: Site Repository Group ID Version Stable Maven Central org.eclipse.hawk 2.1.0 Interim OSSRH org.eclipse.hawk 2.2.0-SNAPSHOT","title":"Plain libraries"},{"location":"#thrift-api-libraries","text":"There are Apache Thrift client libraries targeting C++, Java, JavaScript, and Python for talking with a Hawk server over its Thrift API. The Java libraries are available as Maven artefacts (see above). 
The C++ and JavaScript libraries can be downloaded from the links below.","title":"Thrift API libraries"},{"location":"#c-libraries","text":"Site Location Stable http://download.eclipse.org/hawk/2.1.0/hawk-thrift-cpp-2.1.0.tar.gz Interim http://download.eclipse.org/hawk/2.2.0/hawk-thrift-cpp-2.2.0.tar.gz","title":"C++ libraries"},{"location":"#javascript-libraries","text":"Site Location Stable http://download.eclipse.org/hawk/2.1.0/hawk-thrift-js-2.1.0.tar.gz Interim http://download.eclipse.org/hawk/2.2.0/hawk-thrift-js-2.2.0.tar.gz","title":"JavaScript libraries"},{"location":"#python-libraries","text":"Site Location Stable http://download.eclipse.org/hawk/2.1.0/hawk-thrift-py-2.1.0.tar.gz Interim http://download.eclipse.org/hawk/2.2.0/hawk-thrift-py-2.2.0.tar.gz","title":"Python libraries"},{"location":"#firewall-friendly-artefacts","text":"For environments with corporate firewalls, the zipped update sites, zipped source code, and prebuilt CLI/server products for Linux, MacOS and Windows are available from the download folders: Folder Location Stable https://download.eclipse.org/hawk/2.1.0/ Interim https://download.eclipse.org/hawk/2.2.0/","title":"Firewall-friendly artefacts"},{"location":"#docker-images","text":"Docker images for the Hawk Server are available from the hawk-docker project at Hawk Labs. 
These Docker images are rebuilt at least once a week, or whenever there are new changes in Hawk.","title":"Docker images"},{"location":"#source-code","text":"To access the source code, clone the Git repository for Hawk with your preferred client from: https://gitlab.eclipse.org/eclipse/hawk/hawk.git Committers will use a different URL: git@gitlab.eclipse.org:eclipse/hawk/hawk.git You can also read the code through your browser from the Eclipse Gitlab instance (which allows for archive downloads).","title":"Source code"},{"location":"#older-versions","text":"Downloads for older versions are archived at Eclipse.org: 2.0.0","title":"Older versions"},{"location":"additional-resources/","text":"Screencasts \u00b6 We have several screencasts that show how to use Hawk and work on its code: Running of basic operations Use of advanced features Download and configuration of Hawk onto a fresh Eclipse Luna (Modeling Tools) distribution How to use Hawk to add Modelio metamodel(s) Use Hawk Server to auto-configure & start Hawk Instances Papers \u00b6 Hawk has been at the core of a long series of papers. 
These are listed in chronological order (from oldest to newest): Hawk: towards a scalable model indexing architecture A Framework to Benchmark NoSQL Data Stores for Large-Scale Model Persistence Towards Scalable Querying of Large-Scale Models Evaluation of Contemporary Graph Databases for Efficient Persistence of Large-Scale Models Towards Incremental Updates in Large-Scale Model Indexes Towards Scalable Model Indexing (PhD Thesis) Stress-testing remote model querying APIs for relational and graph-based stores Integration of a graph-based model indexer in commercial modelling tools Integration of Hawk for Model Metrics into the MEASURE Platform Hawk solutions to the TTC 2018 Social Media Case Scaling-up domain-specific modelling languages through modularity services Querying and Annotating Model Histories with Time-Aware Patterns Scalable modeling technologies in the wild: an experience report on wind turbines control applications development Book chapter: Monitoring model analytics over large repositories with Hawk and MEASURE Temporal Models for History-Aware Explainability Slides \u00b6 Hawk: indexado de modelos en bases de datos NoSQL - 90 minute slides in Spanish about the MONDO project and Hawk MODELS18 tutorial on NeoEMF and Hawk Related tools \u00b6 The HawkQuery SMM MEASURE library allows using Hawk servers as metric providers for the MEASURE platform.","title":"Additional resources"},{"location":"additional-resources/#screencasts","text":"We have several screencasts that show how to use Hawk and work on its code: Running of basic operations Use of advanced features Download and configuration of Hawk onto a fresh Eclipse Luna (Modeling Tools) distribution How to use Hawk to add Modelio metamodel(s) Use Hawk Server to auto-configure & start Hawk Instances","title":"Screencasts"},{"location":"additional-resources/#papers","text":"Hawk has been at the core of a long series of papers. 
These are listed in chronological order (from oldest to newest): Hawk: towards a scalable model indexing architecture A Framework to Benchmark NoSQL Data Stores for Large-Scale Model Persistence Towards Scalable Querying of Large-Scale Models Evaluation of Contemporary Graph Databases for Efficient Persistence of Large-Scale Models Towards Incremental Updates in Large-Scale Model Indexes Towards Scalable Model Indexing (PhD Thesis) Stress-testing remote model querying APIs for relational and graph-based stores Integration of a graph-based model indexer in commercial modelling tools Integration of Hawk for Model Metrics into the MEASURE Platform Hawk solutions to the TTC 2018 Social Media Case Scaling-up domain-specific modelling languages through modularity services Querying and Annotating Model Histories with Time-Aware Patterns Scalable modeling technologies in the wild: an experience report on wind turbines control applications development Book chapter: Monitoring model analytics over large repositories with Hawk and MEASURE Temporal Models for History-Aware Explainability","title":"Papers"},{"location":"additional-resources/#slides","text":"Hawk: indexado de modelos en bases de datos NoSQL - 90 minute slides in Spanish about the MONDO project and Hawk MODELS18 tutorial on NeoEMF and Hawk","title":"Slides"},{"location":"additional-resources/#related-tools","text":"The HawkQuery SMM MEASURE library allows using Hawk servers as metric providers for the MEASURE platform.","title":"Related tools"},{"location":"advanced-use/advanced-props/","text":"When querying through EOL, we can access several extra properties on any model element: eAllContents : collection with all the model elements directly or indirectly contained within this one. eContainer : returns the model element that contains this one, or null if it does not have a container. eContents : collection with all the model elements directly contained within this one. 
hawkFile : string with the repository paths of the files that this model element belongs to, separated by \";\". hawkFiles : collection with the repository paths of all the files that this model element belongs to. hawkIn : collection with all the model elements accessible through incoming references. hawkInEdges : collection with all the incoming references (see their attributes below). hawkOut : collection with all the model elements accessible through outgoing references. hawkOutEdges : collection with all the outgoing references (see their attributes below). hawkProxies : collection with all the proxy reference lists (see their properties below). hawkRepo : string with the URLs of the repositories that this model element belongs to, separated by \";\". hawkRepos : collection with all the repositories that this model element belongs to. hawkURIFragment : URI fragment of the model element within its file. There is also the isContainedWithin(repo, path) method for checking if an element is directly or indirectly contained within a certain file. References \u00b6 References are wrapped into entities of their own, with the following attributes: edge : raw edge, without wrapping. type / name : name of the reference. source / startNode : source of the reference. target / endNode : target of the reference. Proxy reference lists \u00b6 A proxy reference list represents all the unresolved links from a node to the elements in a certain file. These links may be unresolved as the file may be missing, or the specific elements may not be in the file. Proxy reference lists have the following fields: sourceNodeID : unique ID of the source model element node for these proxy references. targetFile : returns an object which refers to the target file. This object has several fields: repositoryURL : string with URL of the repository that should have this file. filePath : string with path within the repository for the file. 
references : returns a collection with each of the proxy references to the missing file. Each reference has the following fields: edgeLabel : name of the proxy reference. isContainment : true if and only if the proxy reference is a containment reference (the target is contained within the source). isContainer : true if and only if the proxy reference is a container reference (the source is contained within the target). target : object which refers to the target of the proxy reference. This object has several fields: repositoryURL : string with URL of the repository that should have this file. filePath : string with path within the repository for the file. fragment : string with the fragment that identifies the model element within the file. isFragmentBased : true if and only if the proxy reference is purely fragment-based (file path is irrelevant). This can be the case for some modelling technologies (e.g. Modelio).","title":"Advanced properties"},{"location":"advanced-use/advanced-props/#references","text":"References are wrapped into entities of their own, with the following attributes: edge : raw edge, without wrapping. type / name : name of the reference. source / startNode : source of the reference. target / endNode : target of the reference.","title":"References"},{"location":"advanced-use/advanced-props/#proxy-reference-lists","text":"A proxy reference list represents all the unresolved links from a node to the elements in a certain file. These links may be unresolved as the file may be missing, or the specific elements may not be in the file. Proxy reference lists have the following fields: sourceNodeID : unique ID of the source model element node for these proxy references. targetFile : returns an object which refers to the target file. This object has several fields: repositoryURL : string with URL of the repository that should have this file. filePath : string with path within the repository for the file. 
references : returns a collection with each of the proxy references to the missing file. Each reference has the following fields: edgeLabel : name of the proxy reference. isContainment : true if and only if the proxy reference is a containment reference (the target is contained within the source). isContainer : true if and only if the proxy reference is a container reference (the source is contained within the target). target : object which refers to the target of the proxy reference. This object has several fields: repositoryURL : string with URL of the repository that should have this file. filePath : string with path within the repository for the file. fragment : string with the fragment that identifies the model element within the file. isFragmentBased : true if and only if the proxy reference is purely fragment-based (file path is irrelevant). This can be the case for some modelling technologies (e.g. Modelio).","title":"Proxy reference lists"},{"location":"advanced-use/graph-as-emf/","text":"In addition to regular querying, it is possible to use a Hawk graph as a model itself. To do so, use the \"File > New > Other > Hawk > Local Hawk Model Descriptor\" wizard and select the Hawk instance you want to access as a model. Once the wizard is finished, open the .localhawkmodel file to browse through it as an EMF model. You will need to ensure that the EPackages of the indexed models are part of your EMF package registry: normally Hawk should ensure this happens. For a Hawk index containing the GraBaTs 2009 set0.xmi file, it will look like this: The actual editor is a customized version of the Epsilon Exeed editor, which is based on the standard EMF reflective tree-based editor. The contents of the graph are navigated lazily, so we can open huge models very quickly and navigate through them. The editor also provides additional \"Custom\" actions when we right click on a top-level node (usually labelled with URLs). 
Currently, it supports an efficient Fetch by EClass method that allows fetching all the instances of a type immediately, without having to load the rest of the model. Future versions of Hawk may expose additional operations through this menu. Finally, the EMF resource can be used normally from any EMF-based tools (e.g. transformation engines). However, to make the most out of the resources, it will be necessary to extend the tools to have them integrate the efficient graph-based operations that are not part of the EMF Resource interface.","title":"Graph as EMF model"},{"location":"advanced-use/meta-queries/","text":"Hawk extends the regular EOL facilities to be able to query the metamodels registered within the instance: Model.files lists all the files indexed by Hawk (may be limited through the context). Model.metamodels lists all the metamodels registered in Hawk ( EPackage instances for EMF). Model.proxies lists all the proxy reference lists present in the graph. Each proxy reference list is a collection of the unresolved references from a model element node to the elements of a particular file. For details, please consult the advanced properties page . Model.types lists all the types registered in Hawk ( EClass instances for EMF). Model.getFileOf(obj) retrieves the first file containing the object obj . Model.getFilesOf(obj) retrieves all the files containing the object obj . Model.getProxies(repositoryPrefix) lists all the proxy reference lists for files in repositories matching the specified prefix. Model.getTypeOf(obj) retrieves the type of the object obj . Metamodels \u00b6 For a metamodel mm , these attributes are available: mm.dependencies lists the metamodels this metamodel depends on (usually at least the Ecore metamodel for EMF-based metamodels). mm.metamodelType is the type of metamodel that was registered. mm.node returns the underlying IGraphNode . 
mm.resource retrieves the original string representation for this metamodel (the original .ecore file for EMF). mm.types lists the types defined in this metamodel. mm.uri is the namespace URI of the metamodel. Types \u00b6 For a type t , these attributes are available: t.all retrieves all instances of that type efficiently (includes subtypes). t.attributes lists the attributes of the type, as slots (see below). t.features lists the attributes and references of the type. t.metamodel retrieves the metamodel that defines the type. t.name retrieves the name of the type. t.node returns the underlying IGraphNode . t.references lists the references of the type, as slots. Slots \u00b6 For a slot sl , these attributes are available: sl.name : name of the slot. sl.type : type of the value of the slot. sl.isMany : true if this is a multi-valued slot. sl.isOrdered : true if the values should follow some order. sl.isAttribute : true if this is an attribute slot. sl.isReference : true if this is a reference slot. sl.isUnique : true if the value for this slot should be unique within its model. Files \u00b6 For a file f , these attributes are available: f.contents : returns all the model elements in the file. f.node : returns the underlying IGraphNode . f.path : returns the path of the file within the repository (e.g. /input.xmi ). f.repository : returns the URL of the repository (e.g. file:///home/myuser/models ). f.roots : returns the root model elements in the file.","title":"Meta-level queries"},{"location":"advanced-use/meta-queries/#metamodels","text":"For a metamodel mm , these attributes are available: mm.dependencies lists the metamodels this metamodel depends on (usually at least the Ecore metamodel for EMF-based metamodels). mm.metamodelType is the type of metamodel that was registered. mm.node returns the underlying IGraphNode . mm.resource retrieves the original string representation for this metamodel (the original .ecore file for EMF). 
mm.types lists the types defined in this metamodel. mm.uri is the namespace URI of the metamodel.","title":"Metamodels"},{"location":"advanced-use/meta-queries/#types","text":"For a type t , these attributes are available: t.all retrieves all instances of that type efficiently (includes subtypes). t.attributes lists the attributes of the type, as slots (see below). t.features lists the attributes and references of the type. t.metamodel retrieves the metamodel that defines the type. t.name retrieves the name of the type. t.node returns the underlying IGraphNode . t.references lists the references of the type, as slots.","title":"Types"},{"location":"advanced-use/meta-queries/#slots","text":"For a slot sl , these attributes are available: sl.name : name of the slot. sl.type : type of the value of the slot. sl.isMany : true if this is a multi-valued slot. sl.isOrdered : true if the values should follow some order. sl.isAttribute : true if this is an attribute slot. sl.isReference : true if this is a reference slot. sl.isUnique : true if the value for this slot should be unique within its model.","title":"Slots"},{"location":"advanced-use/meta-queries/#files","text":"For a file f , these attributes are available: f.contents : returns all the model elements in the file. f.node : returns the underlying IGraphNode . f.path : returns the path of the file within the repository (e.g. /input.xmi ). f.repository : returns the URL of the repository (e.g. file:///home/myuser/models ). f.roots : returns the root model elements in the file.","title":"Files"},{"location":"advanced-use/oomph/","text":"Oomph has a feature that synchronizes preferences across workspaces (see bug 490549 ). This can be a problem if you expect different workspaces to have different Hawk indexes. If so, you should reconfigure Oomph so it does not record the /instance/org.hawk.osgiserver preferences node at the \"User\" and \"Installation\" levels. 
To do this, go to \"Window > Preferences\", select \"Oomph > Setup Tasks > Preference Recorder\", check \"Record into\", select \"User\" and make sure /instance/org.hawk.osgiserver/config either does not appear or is unchecked . It should be the same for \"Installation\" and \"Workspace\".","title":"Oomph and Hawk"},{"location":"advanced-use/temporal-queries/","text":"The latest versions of Hawk have the capability to index every version of all the models in the locations being monitored. To enable this capability, your Hawk index must meet certain conditions: You must be using a time-aware backend (currently, Greycat). You must be using the time-aware updater (TimeAwareModelUpdater) and not the standard one. You must be using the time-aware indexer factory and not the standard one (TimeAwareHawkFactory). You must query the index with a time-aware query language: org.hawk.timeaware.queries.TimeAwareEOLQueryEngine org.hawk.timeaware.queries.TimelineEOLQueryEngine If you meet these constraints, you can index a SVN repository with models and Hawk will turn the full history of every model into an integrated temporal graph database, or index a workspace/local folder and have Hawk remember the history of every model from then onwards. You will be able to query this temporal graph through an extension of Hawk's EOL dialect. This functionality was first discussed in our MRT 2018 paper, \"Reflecting on the past and the present with temporal graph-based models\". Data model \u00b6 The usual type -> model element graph in Hawk is extended to give both types and model elements their own histories. The histories are defined as follows: Types are immortal: they are created at the first endpoint in the graph and last to the \"end of time\" of the graph. There is a new version whenever an instance of the type is created or destroyed. Model elements are created at a certain timepoint, and either survive or are destroyed at another timepoint. 
Model elements are assumed to have a persistent identity: either its natural/artificial identifier, or its location within the model. New versions are produced when an attribute or a reference changes. Timepoints are provided by the Hawk connectors, and they tend to be commit timestamps or file timestamps. In SVN, these are commit timestamps to millisecond precision. Basic history traversal primitives \u00b6 The actual primitives are quite simple. In the time-aware dialect of Hawk, types and model elements expose the following additional attributes and operations: x.versions : returns the sequence of all versions for x , from newest to oldest x.getVersionsBetween(from, to) : versions within a range of timepoints x.getVersionsFrom(from) : versions from a timepoint (included) x.getVersionsUpTo(from) : versions up to a timepoint (included) x.earliest , x.latest : earliest / latest version x.next , x.prev / x.previous : next / previous version x.time : version timepoint Temporal assertions \u00b6 It is possible to evaluate assertions over the history of a type or model element: x.always(version | predicate over version) : true if and only if (\"iff\") the predicate is true for every version of x . x.never(version | predicate over version) : true iff the predicate is false for every version of x . x.eventually(version | predicate over version) : true iff the predicate is true for some version of x . x.eventuallyAtLeast(version | predicate over version, count) : true iff the predicate is true in at least count versions of x . x.eventuallyAtMost(version | predicate over version, count) : true iff the predicate is true in at least one version and at most count versions of x . Scoping views (predicate-based) \u00b6 The versions in scope for the above assertions and primitives can be limited with: x.since(version | predicate over version) will return the type/model element in the oldest timepoint since that of x for which the predicate holds, or null if it does not exist. 
The returned type/model element will only report versions from its timepoint onwards. This essentially imposes a left-closed version interval. x.after(version | predicate over version) will return the type/model element in the timepoint immediately after the oldest timepoint for which the predicate holds, or null if it does not exist. It is essentially a variant of x.since that implements a left-open interval. x.until(version | predicate over version) will return the same type/model element, but it will only report versions up to and including the first one for which the predicate holds, or null if such a version does not exist. This implements a right-closed version interval. x.before(version | predicate over version) will return the same type/model element, but it will only report versions before (excluding) the first one for which the predicate holds, or null if such a version does not exist. This implements a right-open interval. x.when(version | predicate over version) will return the type/model element in the oldest timepoint since that of x for which the predicate holds, or null if it does not exist. The returned type/model element will only report versions from its timepoint onwards that match the predicate. This is a left-closed, filtered interval. Scoping views (context-based) \u00b6 You can also limit the available versions from an existing type / model element: x.sinceThen : version of x that will only report the versions from x onwards (included). x.afterThen : next version of x that will only report the versions after x (excluded). null if a next version does not exist. x.untilThen : version of x that will only report the versions up to x (included). x.beforeThen : previous version of x that will only report the versions before x (excluded). null if a previous version does not exist. You can undo the scoping with .unscoped . This will give you the same model element or type, but with all the versions available once more. 
Scoping views (based on derived attributes) \u00b6 Some of the events we may be interested in may be very rare. In long histories, it may be very expensive to find such rare events by iterating over all the versions of a model element. In these cases, it is possible to define a derived Boolean attribute (e.g. HasManyChildren for a Tree , with definition return self.children.size > 100; ) on a type, and then use these additional operations: x.whenAnnotated('AttributeName') : returns a view of the model element x that exposes all the versions when the derived attribute named AttributeName defined on the type of x was true . The view will be at the earliest timepoint when this happened. x.sinceAnnotated('AttributeName') : equivalent to since , but using the derived attribute AttributeName . x.afterAnnotated('AttributeName') : equivalent to after . See above. x.untilAnnotated('AttributeName') : equivalent to until . See above. x.beforeAnnotated('AttributeName') : equivalent to before . See above. IMPORTANT : until #83 is resolved, you will need to define these derived attributes before you index any model versions. Global operations on the model \u00b6 The Model global reference is extended with new operations: Model.allInstancesNow returns all instances of the model at the timepoint equal to current system time. Model.allInstancesAt(timepoint) returns all instances of the model at the specified timepoint, measured in the integer amount of milliseconds elapsed since the epoch. Model.getRepository(object) will return a node representing the repository (VCS) that the object belongs to at its current timepoint. From the returned node, you may retrieve the .revision (SVN revision, folder timestamp or Git SHA-1), and the .message associated with the corresponding revision. 
Some examples \u00b6 A simple query to find the number of instances of X in the latest version of the model would be: return X.latest.all.size; If we want to find the second-to-last time that instances of X were created, we could write something like: return X.latest.prev.time; If we want to find an X that at some point had y greater than 0 and still survives to the latest revision, we could write something like: return X.latest.all.select(x|x.versions.exists(vx|vx.y > 0)); More advanced queries can be found in the Git repository for the MRT 2018 experiment tool . Timeline queries \u00b6 If you want to obtain the results of a certain query for all versions of a model, you can use the TimelineEOLQueryEngine instead. This operates by repeating the same query while changing the global timepoint of the graph, so you can write your query as a normal one and see how it evolves over time. For instance, if using return Model.allInstances.size; , you would see how the number of instances evolved over the various versions of the graph. NOTE: due to current implementation restrictions, this will only process versions where type nodes changed (i.e. objects were created or deleted). We plan to lift this restriction in the near future. Current limitations \u00b6 Subtree contexts, file-first/derived allOf and traversal scoping are not yet implemented for this query engine. File/repository patterns do work. Derived features will only work if added before any VCSes are added, and the impact of adding multiple VCS with their own histories has not been tested yet. Please make sure to report any issues!","title":"Temporal queries"},{"location":"advanced-use/temporal-queries/#data-model","text":"The usual type -> model element graph in Hawk is extended to give both types and model elements their own histories. The histories are defined as follows: Types are immortal: they are created at the first endpoint in the graph and last to the \"end of time\" of the graph. 
There is a new version whenever an instance of the type is created or destroyed. Model elements are created at a certain timepoint, and either survive or are destroyed at another timepoint. Model elements are assumed to have a persistent identity: either its natural/artificial identifier, or its location within the model. New versions are produced when an attribute or a reference changes. Timepoints are provided by the Hawk connectors, and they tend to be commit timestamps or file timestamps. In SVN, these are commit timestamps to millisecond precision.","title":"Data model"},{"location":"advanced-use/temporal-queries/#basic-history-traversal-primitives","text":"The actual primitives are quite simple. In the time-aware dialect of Hawk, types and model elements expose the following additional attributes and operations: x.versions : returns the sequence of all versions for x , from newest to oldest x.getVersionsBetween(from, to) : versions within a range of timepoints x.getVersionsFrom(from) : versions from a timepoint (included) x.getVersionsUpTo(from) : versions up to a timepoint (included) x.earliest , x.latest : earliest / latest version x.next , x.prev / x.previous : next / previous version x.time : version timepoint","title":"Basic history traversal primitives"},{"location":"advanced-use/temporal-queries/#temporal-assertions","text":"It is possible to evaluate assertions over the history of a type or model element: x.always(version | predicate over version) : true if and only if (\"iff\") the predicate is true for every version of x . x.never(version | predicate over version) : true iff the predicate is false for every version of x . x.eventually(version | predicate over version) : true iff the predicate is true for some version of x . x.eventuallyAtLeast(version | predicate over version, count) : true iff the predicate is true in at least count versions of x . 
x.eventuallyAtMost(version | predicate over version, count) : true iff the predicate is true in at least one version and at most count versions of x .","title":"Temporal assertions"},{"location":"advanced-use/temporal-queries/#scoping-views-predicate-based","text":"The versions in scope for the above assertions and primitives can be limited with: x.since(version | predicate over version) will return the type/model element in the oldest timepoint since that of x for which the predicate holds, or null if it does not exist. The returned type/model element will only report versions from its timepoint onwards. This essentially imposes a left-closed version interval. x.after(version | predicate over version) will return the type/model element in the timepoint immediately after the oldest timepoint for which the predicate holds, or null if it does not exist. It is essentially a variant of x.since that implements a left-open interval. x.until(version | predicate over version) will return the same type/model element, but it will only report versions up to and including the first one for which the predicate holds, or null if such a version does not exist. This implements a right-closed version interval. x.before(version | predicate over version) will return the same type/model element, but it will only report versions before (excluding) the first one for which the predicate holds, or null if such a version does not exist. This implements a right-open interval. x.when(version | predicate over version) will return the type/model element in the oldest timepoint since that of x for which the predicate holds, or null if it does not exist. The returned type/model element will only report versions from its timepoint onwards that match the predicate. 
This is a left-closed, filtered interval.","title":"Scoping views (predicate-based)"},{"location":"advanced-use/temporal-queries/#scoping-views-context-based","text":"You can also limit the available versions from an existing type / model element: x.sinceThen : version of x that will only report the versions from x onwards (included). x.afterThen : next version of x that will only report the versions after x (excluded). null if a next version does not exist. x.untilThen : version of x that will only report the versions up to x (included). x.beforeThen : previous version of x that will only report the versions before x (excluded). null if a previous version does not exist. You can undo the scoping with .unscoped . This will give you the same model element or type, but with all the versions available once more.","title":"Scoping views (context-based)"},{"location":"advanced-use/temporal-queries/#scoping-views-based-on-derived-attributes","text":"Some of the events we may be interested in may be very rare. In long histories, it may be very expensive to find such rare events by iterating over all the versions of a model element. In these cases, it is possible to define a derived Boolean attribute (e.g. HasManyChildren for a Tree , with definition return self.children.size > 100; ) on a type, and then use these additional operations: x.whenAnnotated('AttributeName') : returns a view of the model element x that exposes all the versions when the derived attribute named AttributeName defined on the type of x was true . The view will be at the earliest timepoint when this happened. x.sinceAnnotated('AttributeName') : equivalent to since , but using the derived attribute AttributeName . x.afterAnnotated('AttributeName') : equivalent to after . See above. x.untilAnnotated('AttributeName') : equivalent to until . See above. x.beforeAnnotated('AttributeName') : equivalent to before . See above. 
IMPORTANT : until #83 is resolved, you will need to define these derived attributes before you index any model versions.","title":"Scoping views (based on derived attributes)"},{"location":"advanced-use/temporal-queries/#global-operations-on-the-model","text":"The Model global reference is extended with new operations: Model.allInstancesNow returns all instances of the model at the timepoint equal to current system time. Model.allInstancesAt(timepoint) returns all instances of the model at the specified timepoint, measured in the integer amount of milliseconds elapsed since the epoch. Model.getRepository(object) will return a node representing the repository (VCS) that the object belongs to at its current timepoint. From the returned node, you may retrieve the .revision (SVN revision, folder timestamp or Git SHA-1), and the .message associated with the corresponding revision.","title":"Global operations on the model"},{"location":"advanced-use/temporal-queries/#some-examples","text":"A simple query to find the number of instances of X in the latest version of the model would be: return X.latest.all.size; If we want to find the second-to-last time that instances of X were created, we could write something like: return X.latest.prev.time; If we want to find an X that at some point had y greater than 0 and still survives to the latest revision, we could write something like: return X.latest.all.select(x|x.versions.exists(vx|vx.y > 0)); More advanced queries can be found in the Git repository for the MRT 2018 experiment tool .","title":"Some examples"},{"location":"advanced-use/temporal-queries/#timeline-queries","text":"If you want to obtain the results of a certain query for all versions of a model, you can use the TimelineEOLQueryEngine instead. This operates by repeating the same query while changing the global timepoint of the graph, so you can write your query as a normal one and see how it evolves over time. 
For instance, if using return Model.allInstances.size; , you would see how the number of instances evolved over the various versions of the graph. NOTE: due to current implementation restrictions, this will only process versions where type nodes changed (i.e. objects were created or deleted). We plan to lift this restriction in the near future.","title":"Timeline queries"},{"location":"advanced-use/temporal-queries/#current-limitations","text":"Subtree contexts, file-first/derived allOf and traversal scoping are not yet implemented for this query engine. File/repository patterns do work. Derived features will only work if added before any VCSes are added, and the impact of adding multiple VCS with their own histories has not been tested yet. Please make sure to report any issues!","title":"Current limitations"},{"location":"basic-use/core-concepts/","text":"Core concepts and general usage \u00b6 Components \u00b6 Hawk is an extensible system. Currently, it contains the following kinds of components: Type Role Current implementations Change listeners React to changes in the graph produced by the updaters Tracing, Validation Graph backends Integrate database technologies Neo4j , OrientDB , Greycat Model drivers Integrate modelling technologies Ecore , BPMN , Modelio , IFC2x3/IFC4 in this repo , and UML2 Query languages Translate high-level queries into efficient graph queries Epsilon Object Language , Epsilon Pattern Language , OrientDB SQL Updaters Update the graph based on the detected changes in the models and metamodels Built-in VCS managers Integrate file-based model repositories Local folders, SVN repositories, Git repositories, Eclipse workspaces, HTTP files General usage \u00b6 Using Hawk generally involves these steps: Create a new Hawk index, based on a specific backend (e.g. Neo4j or OrientDB). Add the required metamodels to the index. Add the model repositories to be monitored. Wait for the initial batch insert (may take some time in large repositories). 
Add the desired indexed and derived attributes. Perform fast and efficient queries on the graph, using one of the supported query languages (see table above). In the following sections, we will show how to perform these steps. Managing indexes with the Hawk view \u00b6 To manage and use Hawk indexes, first open the \"Hawk\" Eclipse view, using \"Window > Show View > Other... > Hawk > Hawk\". It should look like this: Hawk indexes are queried and managed from this view. From left to right, the buttons are: Query: opens the query dialog. Run: starts a Hawk index if it was stopped. Stop: stops a Hawk index if it was running. Sync: requests that the Hawk index check the indexed repositories immediately. Delete: removes an index from the Hawk view, without deleting the actual database (it can usually be recovered later using the \"Import\" button). To remove a local index completely, select it and press Shift+Delete . New: creates a new index (more info below). Import: imports a Hawk index from a factory. Hawk itself only provides a \"local\" factory that looks at the subdirectories of the current Eclipse workspace. Configure: opens the index configuration dialog, which allows for managing the registered metamodels, the repositories to be indexed, the attributes to be derived and the attributes to be indexed. Creating a new index \u00b6 To create a new index, open the Hawk view and use the \"New\" button to open this dialog: The dialog requires these fields: Name: a descriptive name for the index. Only used as an identifier. Instance type: Hawk only supports local instances, but mondo-integration can add support for remote instances. Local storage folder: folder that will store the actual database. If the folder exists, Hawk will reuse that database instead of creating a new one. Remote location: only used for the remote instances in mondo-integration . Enabled plugins: list of plugins that are currently enabled in Hawk. 
Back-end: database backend to be used (currently either Neo4j or OrientDB). Min/max delay: minimum and maximum delays in milliseconds between synchronisations. Hawk will start at the minimum value: every time it does not find any changes, it will double the delay up to the maximum value. If it finds a change, it will reset back to the minimum value. Periodic synchronisation can be completely disabled by changing the minimum and maximum delays to 0: in this mode, Hawk will only synchronise on startup, when a repository is added or when the user requests it manually. Once these fields have been filled in, Hawk will create and set up the index in a short period. Managing metamodels \u00b6 After creating the index, the next step is to register the metamodels of the models that are going to be indexed. To do this, select the index in the Hawk view and either double-click it or click on the \"Configure\" button. The configure dialog will open: The configure dialog has several tabs. For managing metamodels, we need to go to the \"Metamodels\" tab. It will list the URIs of the currently registered metamodels. If a metamodel we need is not listed there, we can use the \"Add\" button to provide Hawk with the appropriate file to be indexed (e.g. the .ecore file for EMF-based models, or the metamodel-descriptor.xml for Modelio-based models). We can also \"Remove\" metamodels: this will remove all dependent models and metamodels as well. To try out Hawk, we recommend adding the JDTAST.ecore metamodel, which was used in the GraBaTs 2009 case study from AtlanMod . For Modelio metamodels, use the metamodel-descriptor.xml for Modelio 3.6 projects (for older projects, use the older descriptors included as metamodel_*.xml files in the Modelio 3.6 sources ). Keep in mind that metamodels may have dependencies on others. You will need to either add all metamodels at once, or add each metamodel after those it depends upon. 
If adding all the metamodels at once, Hawk will rearrange their addition taking into account their mutual dependencies. Note: the EMF driver can parse regular Ecore metamodels with the .ecore extension. Note: regarding the Modelio metamodel-descriptor.xml files, you can find those as part of the Modelio source code . Managing repositories \u00b6 Having added the metamodels of the models to be indexed, the following step is to add the repositories to be indexed. To do so, go to the \"Indexed Locations\" tab of the Hawk configure dialog, and use the \"Add\" button. Hawk will present the following dialog: The fields to be used are as follows: Type: type of repository to be indexed. Location: URL or path to the repository. For local folders, it is recommended to use the \"Browse...\" button to produce the appropriate file:// URL. For SVN, it is best to copy and paste the full URL. For Git repositories, you can use a path to the root folder of your Git clone, or a file://path/to/repo[?branch=BRANCH] URL (where the optional ?branch=BRANCH part can be used to specify a branch other than the one currently checked out). For Workspace repositories, the location is irrelevant: selecting any directory from \"Browse...\" will work just the same. User + pass: for private SVN repositories, these will be the username and password to be used to connect to the repository. Hawk will store the password in the Eclipse secure storage. To try out Hawk, after adding the JDTAST.ecore metamodel from the previous section, we recommend adding a folder with a copy of the set0.xmi file. It has around 70k model elements. To watch over the indexing process, look at the \"Error Log\" view or run Eclipse with the -console option. The supported file extensions are as follows: Driver Extensions EMF .xmi , .model , any extensions in the EMF extension factory map, any extensions mentioned through the org.hawk.emf.model.extraExtensions Java system property (e.g. 
-Dorg.hawk.emf.model.extraExtensions=.railway,.rail ). UML2 .uml . .profile.uml files can be indexed normally and also registered as metamodels. BPMN .bpmn , .bpmn2 . Modelio .exml , .ramc . Parses mmversion.dat internally for metadata. IFC .ifc , .ifcxml , .ifc.txt , .ifcxml.txt , .ifc.zip , .ifczip . Managing indexed attributes \u00b6 Simply indexing the models into the graph will already considerably speed up some common queries, such as finding all the instances of a type: in Hawk, this is done through direct edge traversal instead of going through the entire model. However, queries that filter model elements through the value of their attributes will need additional indexing to be set up. For instance, if we wanted to speed up return Class.all.selectOne(c|c.name='MyClass'); (which returns the class named \"MyClass\"), we would need to index the name attribute in the Class type. To do so, we need to go to the Hawk configure dialog, select the \"Indexed Attributes\" tab and press the \"Add\" button. This dialog will open: Its fields are as follows: Metamodel URI: the URI of the metamodel that has the type we want. Type Name: the name of the type (here \"Class\"). Attribute Name: the name of the attribute to be indexed (here \"name\"). Please allow some time after the dialog is closed to have Hawk generate the index. Currently, Hawk can index attributes with strings, booleans and numbers. Indexing will speed up not only = , but also > and all the other relational operators. Managing derived attributes \u00b6 Sometimes we will need to filter model elements through a piece of information that is not directly stored among its attributes, but is rather computed from them. To speed up the process, Hawk can precompute these derived attributes in the graph, keeping them up to date and indexing them. 
For instance, if we wanted to quickly filter UML classes by their number of operations, we would go to the Hawk configure dialog, select the \"Derived Attributes\" tab and click on the \"Add\" button. This dialog would appear: The fields are as follows: Metamodel URI: the URI of the metamodel with the type to be extended. Type Name: the name of the type we are going to extend. Attribute Name: the name of the new derived attribute (should be unique). Attribute Type: the type of the new derived attribute. isMany: true if this is a collection of values, false otherwise. isOrdered: true if this is an ordered collection of values, false otherwise. isUnique: true if the value should provide a unique identifier, false otherwise. Derivation Language: query language that the derivation logic will be written in. EOL is the default choice. Derivation Logic: expression in the chosen language that will compute the value. Hawk provides the self variable to access the model element being extended. For this particular example, we'd set the fields like this: Metamodel URI: the URI of the UML metamodel. Type Name: Class. Attribute Name: ownedOperationCount. Attribute Type: Integer. isMany, isOrdered, isUnique: false. Derivation Language: EOLQueryEngine. Derivation Logic: return self.ownedOperation.size; . After pressing OK, Hawk will spend some time computing the derived attribute and indexing the value. After that, queries such as return Class.all.select(c|c.ownedOperationCount > 20); will complete much faster. Querying the graph \u00b6 To query the indexed models, use the \"Query\" button of the Hawk view. This dialog will open: The actual query can be entered through the \"Query\" field manually, or loaded from a file using the \"Query File\" button. The query should be written in the language selected in \"Query Engine\". 
The scope of the query can be limited using the \"Context Repositories\" and \"Context Files\" fields: for instance, using set1.xmi in the \"Context Files\" field would limit it to the contents of the set1.xmi file. Running the query with the \"Run Query\" button will place the results in the \"Result\" field.","title":"Core concepts"},{"location":"basic-use/core-concepts/#core-concepts-and-general-usage","text":"","title":"Core concepts and general usage"},{"location":"basic-use/core-concepts/#components","text":"Hawk is an extensible system. Currently, it contains the following kinds of components: Type Role Current implementations Change listeners React to changes in the graph produced by the updaters Tracing, Validation Graph backends Integrate database technologies Neo4j , OrientDB , Greycat Model drivers Integrate modelling technologies Ecore , BPMN , Modelio , IFC2x3/IFC4 in this repo , and UML2 Query languages Translate high-level queries into efficient graph queries Epsilon Object Language , Epsilon Pattern Language , OrientDB SQL Updaters Update the graph based on the detected changes in the models and metamodels Built-in VCS managers Integrate file-based model repositories Local folders, SVN repositories, Git repositories, Eclipse workspaces, HTTP files","title":"Components"},{"location":"basic-use/core-concepts/#general-usage","text":"Using Hawk generally involves these steps: Create a new Hawk index, based on a specific backend (e.g. Neo4j or OrientDB). Add the required metamodels to the index. Add the model repositories to be monitored. Wait for the initial batch insert (may take some time in large repositories). Add the desired indexed and derived attributes. Perform fast and efficient queries on the graph, using one of the supported query languages (see table above). 
In the following sections, we will show how to perform these steps.","title":"General usage"},{"location":"basic-use/core-concepts/#managing-indexes-with-the-hawk-view","text":"To manage and use Hawk indexes, first open the \"Hawk\" Eclipse view, using \"Window > Show View > Other... > Hawk > Hawk\". It should look like this: Hawk indexes are queried and managed from this view. From left to right, the buttons are: Query: opens the query dialog. Run: starts a Hawk index if it was stopped. Stop: stops a Hawk index if it was running. Sync: requests that the Hawk index check the indexed repositories immediately. Delete: removes an index from the Hawk view, without deleting the actual database (it can usually be recovered later using the \"Import\" button). To remove a local index completely, select it and press Shift+Delete . New: creates a new index (more info below). Import: imports a Hawk index from a factory. Hawk itself only provides a \"local\" factory that looks at the subdirectories of the current Eclipse workspace. Configure: opens the index configuration dialog, which allows for managing the registered metamodels, the repositories to be indexed, the attributes to be derived and the attributes to be indexed.","title":"Managing indexes with the Hawk view"},{"location":"basic-use/core-concepts/#creating-a-new-index","text":"To create a new index, open the Hawk view and use the \"New\" button to open this dialog: The dialog requires these fields: Name: a descriptive name for the index. Only used as an identifier. Instance type: Hawk only supports local instances, but mondo-integration can add support for remote instances. Local storage folder: folder that will store the actual database. If the folder exists, Hawk will reuse that database instead of creating a new one. Remote location: only used for the remote instances in mondo-integration . Enabled plugins: list of plugins that are currently enabled in Hawk. 
Back-end: database backend to be used (currently either Neo4j or OrientDB). Min/max delay: minimum and maximum delays in milliseconds between synchronisations. Hawk will start at the minimum value: every time it does not find any changes, it will double the delay up to the maximum value. If it finds a change, it will reset back to the minimum value. Periodic synchronisation can be completely disabled by changing the minimum and maximum delays to 0: in this mode, Hawk will only synchronise on startup, when a repository is added or when the user requests it manually. Once these fields have been filled in, Hawk will create and set up the index in a short period.","title":"Creating a new index"},{"location":"basic-use/core-concepts/#managing-metamodels","text":"After creating the index, the next step is to register the metamodels of the models that are going to be indexed. To do this, select the index in the Hawk view and either double-click it or click on the \"Configure\" button. The configure dialog will open: The configure dialog has several tabs. For managing metamodels, we need to go to the \"Metamodels\" tab. It will list the URIs of the currently registered metamodels. If a metamodel we need is not listed there, we can use the \"Add\" button to provide Hawk with the appropriate file to be indexed (e.g. the .ecore file for EMF-based models, or the metamodel-descriptor.xml for Modelio-based models). We can also \"Remove\" metamodels: this will remove all dependent models and metamodels as well. To try out Hawk, we recommend adding the JDTAST.ecore metamodel, which was used in the GraBaTs 2009 case study from AtlanMod . For Modelio metamodels, use the metamodel-descriptor.xml for Modelio 3.6 projects (for older projects, use the older descriptors included as metamodel_*.xml files in the Modelio 3.6 sources ). Keep in mind that metamodels may have dependencies on others. 
You will need to either add all metamodels at once, or add each metamodel after those it depends upon. If adding all the metamodels at once, Hawk will rearrange their addition taking into account their mutual dependencies. Note: the EMF driver can parse regular Ecore metamodels with the .ecore extension. Note: regarding the Modelio metamodel-descriptor.xml files, you can find those as part of the Modelio source code .","title":"Managing metamodels"},{"location":"basic-use/core-concepts/#managing-repositories","text":"Having added the metamodels of the models to be indexed, the following step is to add the repositories to be indexed. To do so, go to the \"Indexed Locations\" tab of the Hawk configure dialog, and use the \"Add\" button. Hawk will present the following dialog: The fields to be used are as follows: Type: type of repository to be indexed. Location: URL or path to the repository. For local folders, it is recommended to use the \"Browse...\" button to produce the appropriate file:// URL. For SVN, it is best to copy and paste the full URL. For Git repositories, you can use a path to the root folder of your Git clone, or a file://path/to/repo[?branch=BRANCH] URL (where the optional ?branch=BRANCH part can be used to specify a branch other than the one currently checked out). For Workspace repositories, the location is irrelevant: selecting any directory from \"Browse...\" will work just the same. User + pass: for private SVN repositories, these will be the username and password to be used to connect to the repository. Hawk will store the password in the Eclipse secure storage. To try out Hawk, after adding the JDTAST.ecore metamodel from the previous section, we recommend adding a folder with a copy of the set0.xmi file. It has around 70k model elements. To watch over the indexing process, look at the \"Error Log\" view or run Eclipse with the -console option. 
The supported file extensions are as follows: Driver Extensions EMF .xmi , .model , any extensions in the EMF extension factory map, any extensions mentioned through the org.hawk.emf.model.extraExtensions Java system property (e.g. -Dorg.hawk.emf.model.extraExtensions=.railway,.rail ). UML2 .uml . .profile.uml files can be indexed normally and also registered as metamodels. BPMN .bpmn , .bpmn2 . Modelio .exml , .ramc . Parses mmversion.dat internally for metadata. IFC .ifc , .ifcxml , .ifc.txt , .ifcxml.txt , .ifc.zip , .ifczip .","title":"Managing repositories"},{"location":"basic-use/core-concepts/#managing-indexed-attributes","text":"Simply indexing the models into the graph will already considerably speed up some common queries, such as finding all the instances of a type: in Hawk, this is done through direct edge traversal instead of going through the entire model. However, queries that filter model elements through the value of their attributes will need additional indexing to be set up. For instance, if we wanted to speed up return Class.all.selectOne(c|c.name='MyClass'); (which returns the class named \"MyClass\"), we would need to index the name attribute in the Class type. To do so, we need to go to the Hawk configure dialog, select the \"Indexed Attributes\" tab and press the \"Add\" button. This dialog will open: Its fields are as follows: Metamodel URI: the URI of the metamodel that has the type we want. Type Name: the name of the type (here \"Class\"). Attribute Name: the name of the attribute to be indexed (here \"name\"). Please allow some time after the dialog is closed to have Hawk generate the index. Currently, Hawk can index attributes with strings, booleans and numbers. 
Indexing will speed up not only = , but also > and all the other relational operators.","title":"Managing indexed attributes"},{"location":"basic-use/core-concepts/#managing-derived-attributes","text":"Sometimes we will need to filter model elements through a piece of information that is not directly stored among its attributes, but is rather computed from them. To speed up the process, Hawk can precompute these derived attributes in the graph, keeping them up to date and indexing them. For instance, if we wanted to quickly filter UML classes by their number of operations, we would go to the Hawk configure dialog, select the \"Derived Attributes\" tab and click on the \"Add\" button. This dialog would appear: The fields are as follows: Metamodel URI: the URI of the metamodel with the type to be extended. Type Name: the name of the type we are going to extend. Attribute Name: the name of the new derived attribute (should be unique). Attribute Type: the type of the new derived attribute. isMany: true if this is a collection of values, false otherwise. isOrdered: true if this is an ordered collection of values, false otherwise. isUnique: true if the value should provide a unique identifier, false otherwise. Derivation Language: query language that the derivation logic will be written in. EOL is the default choice. Derivation Logic: expression in the chosen language that will compute the value. Hawk provides the self variable to access the model element being extended. For this particular example, we'd set the fields like this: Metamodel URI: the URI of the UML metamodel. Type Name: Class. Attribute Name: ownedOperationCount. Attribute Type: Integer. isMany, isOrdered, isUnique: false. Derivation Language: EOLQueryEngine. Derivation Logic: return self.ownedOperation.size; . After pressing OK, Hawk will spend some time computing the derived attribute and indexing the value. 
After that, queries such as return Class.all.select(c|c.ownedOperationCount > 20); will complete much faster.","title":"Managing derived attributes"},{"location":"basic-use/core-concepts/#querying-the-graph","text":"To query the indexed models, use the \"Query\" button of the Hawk view. This dialog will open: The actual query can be entered through the \"Query\" field manually, or loaded from a file using the \"Query File\" button. The query should be written in the language selected in \"Query Engine\". The scope of the query can be limited using the \"Context Repositories\" and \"Context Files\" fields: for instance, using set1.xmi in the \"Context Files\" field would limit it to the contents of the set1.xmi file. Running the query with the \"Run Query\" button will place the results in the \"Result\" field.","title":"Querying the graph"},{"location":"basic-use/examples-modelio/","text":"Example queries on Modelio models \u00b6 This article shows several example queries on Modelio projects. The Modelio model driver does not use the XMI export in Modelio: instead, it parses .exml files directly (which might be contained in .ramc files) and understands metamodels described in Modelio metamodel_descriptor.xml files. (To obtain one, download the source code for your Modelio version and search within it. Here is a copy of the one used for Modelio 3.6.) All the queries are written in the Epsilon Object Language , and assume that the toy Zoo Modelio project has been indexed. The queries are based on those in [[the XMI-based UML examples page|Example queries on XMI based UML models]]. The underlying UML model looks like this: To avoid ambiguity in type names, the default namespaces list in the query dialog should include modelio://uml::statik . All instances of a type \u00b6 Returns the number of instances of \"Class\" in the index: return Class . all . 
size ; Metamodel URI for the \"Class\" type \u00b6 Returns the URI of the metamodel that contains the \"Class\" type ( modelio://uml::statik ): return Model . types . selectOne ( t | t . name = 'Class' ). metamodel . uri ; Reference slots in a type \u00b6 Returns the reference slots in the type \"Class\": return Model . types . select ( t | t . name = 'Class' ). references ; Reference traversal \u00b6 Returns the superclass of \"Zebra\" by navigating the \"Parent\" and \"SuperType\" associations present in the Modelio metamodel: return Class . all . selectOne ( c | c . Name = 'Zebra' ) . Parent . SuperType . Name ; Reverse reference traversal \u00b6 Returns the subclasses of \"Animal\", using the revRefNav_ prefix to navigate references in reverse: return Class . all . selectOne ( c | c . Name = 'Animal' ) . revRefNav_SuperType . revRefNav_Parent . Name ; Range queries with indexed or derived integer attributes \u00b6 This example requires adding a derived attribute first: Metamodel URI: modelio://uml::statik Type Name: Class Attribute Name: ownedOperationCount Attribute Type: Integer isMany, isOrdered, isUnique: false Derivation Language: EOLQueryEngine Derivation Logic: return self.OwnedOperation.size; After it has been added, this query will return the classes that have one or more operations: return Class . all . select ( c | c . ownedOperationCount > 0 ). Name ; Advanced example: loops, variables and custom operations \u00b6 This query produces a sequence of >x, y pairs which indicate that y classes have more than x operations of their own: var counts = Sequence {}; var i = 0 ; var n = count ( 0 ); while ( n > 0 ) { counts . add ( Sequence { \">\" + i , n }); i = i + 1 ; n = count ( i ); } return counts ; operation count ( n ) { return Class . all . select ( c | c . ownedOperationCount > n ). 
size ; }","title":"Examples (Modelio)"},{"location":"basic-use/examples-modelio/#example-queries-on-modelio-models","text":"This article shows several example queries on Modelio projects. The Modelio model driver does not use the XMI export in Modelio: instead, it parses .exml files directly (which might be contained in .ramc files) and understands metamodels described in Modelio metamodel_descriptor.xml files. (To obtain one, download the source code for your Modelio version and search within it. Here is a copy of the one used for Modelio 3.6.) All the queries are written in the Epsilon Object Language , and assume that the toy Zoo Modelio project has been indexed. The queries are based on those in [[the XMI-based UML examples page|Example queries on XMI based UML models]]. The underlying UML model looks like this: To avoid ambiguity in type names, the default namespaces list in the query dialog should include modelio://uml::statik .","title":"Example queries on Modelio models"},{"location":"basic-use/examples-modelio/#all-instances-of-a-type","text":"Returns the number of instances of \"Class\" in the index: return Class . all . size ;","title":"All instances of a type"},{"location":"basic-use/examples-modelio/#metamodel-uri-for-the-class-type","text":"Returns the URI of the metamodel that contains the \"Class\" type ( modelio://uml::statik ): return Model . types . selectOne ( t | t . name = 'Class' ). metamodel . uri ;","title":"Metamodel URI for the \"Class\" type"},{"location":"basic-use/examples-modelio/#reference-slots-in-a-type","text":"Returns the reference slots in the type \"Class\": return Model . types . select ( t | t . name = 'Class' ). references ;","title":"Reference slots in a type"},{"location":"basic-use/examples-modelio/#reference-traversal","text":"Returns the superclass of \"Zebra\" by navigating the \"Parent\" and \"SuperType\" associations present in the Modelio metamodel: return Class . all . selectOne ( c | c . Name = 'Zebra' ) . 
Parent . SuperType . Name ;","title":"Reference traversal"},{"location":"basic-use/examples-modelio/#reverse-reference-traversal","text":"Returns the subclasses of \"Animal\", using the revRefNav_ prefix to navigate references in reverse: return Class . all . selectOne ( c | c . Name = 'Animal' ) . revRefNav_SuperType . revRefNav_Parent . Name ;","title":"Reverse reference traversal"},{"location":"basic-use/examples-modelio/#range-queries-with-indexed-or-derived-integer-attributes","text":"This example requires adding a derived attribute first: Metamodel URI: modelio://uml::statik Type Name: Class Attribute Name: ownedOperationCount Attribute Type: Integer isMany, isOrdered, isUnique: false Derivation Language: EOLQueryEngine Derivation Logic: return self.OwnedOperation.size; After it has been added, this query will return the classes that have one or more operations: return Class . all . select ( c | c . ownedOperationCount > 0 ). Name ;","title":"Range queries with indexed or derived integer attributes"},{"location":"basic-use/examples-modelio/#advanced-example-loops-variables-and-custom-operations","text":"This query produces a sequence of >x, y pairs which indicate that y classes have more than x operations of their own: var counts = Sequence {}; var i = 0 ; var n = count ( 0 ); while ( n > 0 ) { counts . add ( Sequence { \">\" + i , n }); i = i + 1 ; n = count ( i ); } return counts ; operation count ( n ) { return Class . all . select ( c | c . ownedOperationCount > n ). size ; }","title":"Advanced example: loops, variables and custom operations"},{"location":"basic-use/examples-xmi/","text":"Example queries on XMI models \u00b6 These are some sample queries that can be done on any set of indexed XMI-based UML models, assuming that Class::name has been added as an indexed attribute and Class::ownedOperationCount has been defined as a derived attribute (as shown in [[Basic concepts and usage]]). All the queries are written in the Epsilon Object Language . 
In order to index XMI-based UML models, you only need to enable the UMLMetaModelResourceFactory and UMLModelResourceFactory plugins when you create a new Hawk instance, and ensure your files have the .uml extension. If you are using any predefined UML data types, you may also want to add a PredefinedUMLLibraries location inside \"Indexed Locations\": that will integrate those predefined objects into the Hawk graph, allowing you to reference them in queries. The rest of this article will run on this toy XMI-based UML file , which was exported from this Modelio 3.2.1 project : To avoid ambiguity in type names, the default namespaces list in the query dialog should include the UML metamodel URI ( http://www.eclipse.org/uml2/5.0.0/UML for the above UML.ecore file). All instances of a type \u00b6 return Class.all.size; Returns the total number of classes within the specified scope. If you leave \"Context Files\" empty, it'll count all the classes in all the projects. If you put \"*OSS.modelio.zip\" in \"Context Files\", it'll count only the classes within the OSS project. This is faster than going through the model because we can go to the Class node and then simply count all the incoming edges with label \"ofType\". Reference slots in a type \u00b6 return Model.types.select(t|t.name='Class').references; Gives you all the reference slots in the UML \"Class\" type. This is an example of the queries that can be performed at the \"meta\" level: more details are available in [[Meta level queries in Hawk]]. The query dialog with the result would look like this: Reference traversal \u00b6 return Class.all .select(c|c.qualifiedName='zoo::Zebra') .superClass.flatten.name; Gives you the names of all the superclasses of class Zebra within model zoo . Reverse reference traversal \u00b6 return Class.all .select(c|c.qualifiedName='zoo::Animal') .revRefNav_superClass.flatten.name; Gives the names of all the subclasses of Animal (follows \"superClass\" in reverse). 
The UML metamodel doesn't have \"subclass\" links, but we can use Hawk's automatic support for reverse traversal of references. In general, if x.e is a reference, we can follow it in reverse with x.revRefNav_e . We can also access containers using x.eContainer . Range queries with indexed or derived integer attributes \u00b6 return Class.all.select(c|c.ownedOperationCount > 0).name; Finds the names of the classes with at least one operation of their own. Advanced example: loops, variables and custom operations \u00b6 var counts = Sequence {}; var i = 0; var n = count(0); while (n > 0) { counts.add(Sequence {\">\" + i, n}); i = i + 1; n = count(i); } return counts; operation count(n) { return Class.all.select(c|c.ownedOperationCount > n).size; } This query produces a sequence of >x, y pairs which indicate that y classes have more than x operations of their own.","title":"Examples (XMI)"},{"location":"basic-use/examples-xmi/#example-queries-on-xmi-models","text":"These are some sample queries that can be done on any set of indexed XMI-based UML models, assuming that Class::name has been added as an indexed attribute and Class::ownedOperationCount has been defined as a derived attribute (as shown in [[Basic concepts and usage]]). All the queries are written in the Epsilon Object Language . In order to index XMI-based UML models, you only need to enable the UMLMetaModelResourceFactory and UMLModelResourceFactory plugins when you create a new Hawk instance, and ensure your files have the .uml extension. If you are using any predefined UML data types, you may also want to add a PredefinedUMLLibraries location inside \"Indexed Locations\": that will integrate those predefined objects into the Hawk graph, allowing you to reference them in queries. 
The rest of this article will run on this toy XMI-based UML file , which was exported from this Modelio 3.2.1 project : To avoid ambiguity in type names, the default namespaces list in the query dialog should include the UML metamodel URI ( http://www.eclipse.org/uml2/5.0.0/UML for the above UML.ecore file).","title":"Example queries on XMI models"},{"location":"basic-use/examples-xmi/#all-instances-of-a-type","text":"return Class.all.size; Returns the total number of classes within the specified scope. If you leave \"Context Files\" empty, it'll count all the classes in all the projects. If you put \"*OSS.modelio.zip\" in \"Context Files\", it'll count only the classes within the OSS project. This is faster than going through the model because we can go to the Class node and then simply count all the incoming edges with label \"ofType\".","title":"All instances of a type"},{"location":"basic-use/examples-xmi/#reference-slots-in-a-type","text":"return Model.types.select(t|t.name='Class').references; Gives you all the reference slots in the UML \"Class\" type. This is an example of the queries that can be performed at the \"meta\" level: more details are available in [[Meta level queries in Hawk]]. The query dialog with the result would look like this:","title":"Reference slots in a type"},{"location":"basic-use/examples-xmi/#reference-traversal","text":"return Class.all .select(c|c.qualifiedName='zoo::Zebra') .superClass.flatten.name; Gives you the names of all the superclasses of class Zebra within model zoo .","title":"Reference traversal"},{"location":"basic-use/examples-xmi/#reverse-reference-traversal","text":"return Class.all .select(c|c.qualifiedName='zoo::Animal') .revRefNav_superClass.flatten.name; Gives the names of all the subclasses of Animal (follows \"superClass\" in reverse). The UML metamodel doesn't have \"subclass\" links, but we can use Hawk's automatic support for reverse traversal of references. 
In general, if x.e is a reference, we can follow it in reverse with x.revRefNav_e . We can also access containers using x.eContainer .","title":"Reverse reference traversal"},{"location":"basic-use/examples-xmi/#range-queries-with-indexed-or-derived-integer-attributes","text":"return Class.all.select(c|c.ownedOperationCount > 0).name; Finds the names of the classes with at least one operation of their own.","title":"Range queries with indexed or derived integer attributes"},{"location":"basic-use/examples-xmi/#advanced-example-loops-variables-and-custom-operations","text":"var counts = Sequence {}; var i = 0; var n = count(0); while (n > 0) { counts.add(Sequence {\">\" + i, n}); i = i + 1; n = count(i); } return counts; operation count(n) { return Class.all.select(c|c.ownedOperationCount > n).size; } This query produces a sequence of >x, y pairs which indicate that y classes have more than x operations of their own.","title":"Advanced example: loops, variables and custom operations"},{"location":"basic-use/installation/","text":"Hawk can be used as a regular Java library (to be embedded within another Java program) or as a set of plugins for the Eclipse IDE. To install most of Hawk's Eclipse plugins, point your installation to this update site, which is kept up to date automatically using Travis: https://download.eclipse.org/hawk/2.1.0/updates/ This is a composite update site, which contains not only Hawk, but also its dependencies. Simply check all the categories that start with \"Hawk\". Some of the components in Hawk cannot be redistributed in binary form due to incompatible licenses. You will need to build the update site for these restricted components yourself: please consult the developer resources in the wiki to do that.","title":"Installation"},{"location":"basic-use/papyrus/","text":"Hawk includes specific support for MDT2 UML models and UML profiles developed using Papyrus UML. 
This can be used by enabling the UMLMetaModelResourceFactory and UMLModelResourceFactory plugins when creating a Hawk instance. The implementation mostly reuses MDT UML2 and Papyrus UML as-is, in order to maximize compatibility. There are some minor caveats, which are documented on this page. Supported file extensions \u00b6 Hawk indexes plain UML2 models with the .uml extension, and Papyrus profiles with the .profile.uml extension. It does not index .di nor .notation files at the moment, as these do not provide semantic information. .xmi files are not indexed by the Hawk UML components, to avoid conflicts with the plain EMF support (matching the file to the proper model resource is done strictly by file extension). We recommend renaming your UML2 XMI files to .uml for now. Predefined UML packages \u00b6 UML2 provides an implementation of the UML standard libraries, with packages containing some common datatypes (e.g. String or Integer). If your models use any of these libraries, we strongly recommend that you add a PredefinedUMLLibraries component in your \"Indexed Locations\" section. Otherwise, any references from your models to the libraries will be left unresolved, and you will not be able to use those predefined entities in your queries. This is because Hawk normally operates on files, and the predefined UML libraries are generally bundled within the UML2 plugins. The PredefinedUMLLibraries component exposes those bundled resources to Hawk in a way that is transparent to the querying language. Multi-version Papyrus UML profile support \u00b6 Beyond registering all the metamodels required to index plain UML models, the UML metamodel resource factory in Hawk can register .profile.uml files as metamodels. This allows us to index UML models with custom profiles in Hawk. Since UML profiles can be versioned, Hawk will register version X.Y.Z of a profile with URI http://your/profile using http://your/profile/X.Y.Z as the URI. 
When querying with Hawk, you will have to specify http://your/profile/X.Y.Z in your default namespaces, in order to resolve the ambiguity that may exist between multiple versions of the same metamodel. If a new version of the UML profile is created, you will need to register the .profile.uml file again with Hawk before it can index models that use that version of the profile. Hawk treats entities of different versions of the same profile as entirely different types. In terms of implementation details, Hawk takes advantage of the fact that .profile.uml files contain a collection of Ecore EPackages . Hawk simply adds the /X.Y.Z version suffix to their namespace URI, and otherwise leaves them untouched. Example: using Hawk to index all UML models in an Eclipse workspace \u00b6 We will show how Hawk can be used to index all the UML models in an Eclipse workspace, including those that have custom profiles applied to them. To illustrate our approach, we will use these toy models created with Papyrus. We assume that you have installed Hawk into your Eclipse instance, following the steps in [[this wiki page|Installation]]. Models \u00b6 The model is a very simple UML class diagram: It only has two classes, one of which has the <<Special>> stereotype with a priority property equal to 23. This value is not shown in the diagram, but it can be checked from the \"Profile\" page of the \"Properties\" view when the class is selected. The profile including the <<Special>> stereotype is also very simple: The diagram imports the Class UML metaclass, and then extends it with the <<Special>> stereotype. Creating the Hawk index \u00b6 Before we can run any queries, we need to create a Hawk index. If we have installed Hawk correctly, we will be able to open the \"Hawk\" view and see something like this: Right now, we have no indexes in Hawk. We need to press the \"Add\" button, which is highlighted in red above. 
We should see a dialog similar to this: Important points: We can pick any name we want, as long as it is unique. Instance type should be a LocalHawkFactory if we intend to index our workspace. The Local storage folder will contain some of the configuration of that Hawk instance, and the database. Remote location is irrelevant when using the LocalHawkFactory . If we are only interested in indexing the UML models in the workspace, it is a good idea to Disable all the plugins and then check only the UML metamodel and model resource factories. You can choose to use Neo4j (if you [[build it on your own|Running from source]]), OrientDB, or any other backend we may support in the future. Min/Max Delay indicate how often Hawk will poll all the indexed locations. If you are only indexing the current workspace, you can leave both at 0 to disable polling: regardless of this setting, Hawk will react automatically whenever something in the workspace changes. Once the index has been created, you should see an entry for it in the \"Hawk\" view: Adding metamodels and models \u00b6 From the screenshot above, we know that the index is RUNNING (available for queries) and not UPDATING nor STOPPED , so we can start configuring it as we need. First, we should double-click on it to open the configuration dialog: We should go to the \"Metamodels\" tab and click on \"Add...\", then select the specialThings.profile/model.profile.uml file. Hawk will register our custom profile as a metamodel, and we will be ready to index models using all the versions of this profile so far. Should we define any newer versions, we will have to add the file again to Hawk. The dialog will now list the new metamodel: Now we are ready to add the locations where the models to be indexed are stored. We go to the \"Indexed Locations\" tab and click on \"Add\". First, we will add the predefined UML libraries with some commonly used instances (e.g. 
UML data types): We need to pick the right \"Type\", and then enter / in the \"Location\" field. The location is ignored for this repository, but due to current limitations in the UI we have to enter something in the field. Next, we have to tell Hawk to index all the models in the workspace. We will \"Add\" another location, and this time fill the dialog like this: Again, the / \"Location\" is irrelevant but required by the UI. Hawk will spend some time UPDATING , and once it is RUNNING again we will be ready to run some queries on it. Querying Hawk \u00b6 We can finally query Hawk now. To do so, we need to select our index on the \"Hawk\" view and click on the \"Query\" button, which looks like a magnifying glass: We will see a dialog like this one, with all fields empty: Enter the query return Class.all.name; and click on the \"Run Query\" button. This query lists the names of all the classes indexed so far by Hawk. You will notice that we obtain these results: [E, T, MyClass, Special, V, NotSoSpecial, Stereotype1, K, E] The E/T/V/K/E classes came from the predefined UML libraries. If you want only the results from your workspace, you must tell Hawk through the \"Context Repositories\" field, by entering platform:/resource . This is the base URI used by Hawk to identify all the files in your workspace. Click on \"Run Query\" again, and you should obtain the results shown in the screenshot: [MyClass, Stereotype1, Special, NotSoSpecial] Note how the query also returns the classes in the profile. Should you want to avoid this, you can use the \"Context Files\" field ( *model.uml will do this) to further restrict the scope of the query. Finding UML objects by stereotype \u00b6 If you would like to find all applications of stereotype X , you can simply use X.all and then use base_Metaclass to find the object that was annotated with that stereotype. 
For instance, this query will find the names of all the classes that had the <<Special>> stereotype applied to them: return Special.all.base_Class.name; You will get: [MyClass] You can also access stereotype properties: return Special.all.collect(s| Sequence { s.priority, s.base_Class.name } ).asSequence; This will produce: [[23, MyClass]] Finding stereotype applications from the UML object \u00b6 If you want to go the other way around, you can use reverse reference navigation on those base_X references to find the stereotypes that have been applied to a UML object: return Class.all .selectOne(s|s.name = 'MyClass') .revRefNav_base_Class .collect(st|Model.getTypeOf(st)) .name; This would produce: [Special]","title":"Papyrus UML support"},{"location":"basic-use/papyrus/#supported-file-extensions","text":"Hawk indexes plain UML2 models with the .uml extension, and Papyrus profiles with the .profile.uml extension. It does not index .di nor .notation files at the moment, as these do not provide semantic information. .xmi files are not indexed by the Hawk UML components, to avoid conflicts with the plain EMF support (matching the file to the proper model resource is done strictly by file extension). We recommend renaming your UML2 XMI files to .uml for now.","title":"Supported file extensions"},{"location":"basic-use/papyrus/#predefined-uml-packages","text":"UML2 provides an implementation of the UML standard libraries, with packages containing some common datatypes (e.g. String or Integer). If your models use any of these libraries, we strongly recommend that you add a PredefinedUMLLibraries component in your \"Indexed Locations\" section. Otherwise, any references from your models to the libraries will be left unresolved, and you will not be able to use those predefined entities in your queries. This is because Hawk normally operates on files, and the predefined UML libraries are generally bundled within the UML2 plugins. 
The PredefinedUMLLibraries component exposes those bundled resources to Hawk in a way that is transparent to the querying language.","title":"Predefined UML packages"},{"location":"basic-use/papyrus/#multi-version-papyrus-uml-profile-support","text":"Beyond registering all the metamodels required to index plain UML models, the UML metamodel resource factory in Hawk can register .profile.uml files as metamodels. This allows us to index UML models with custom profiles in Hawk. Since UML profiles can be versioned, Hawk will register version X.Y.Z of a profile with URI http://your/profile using http://your/profile/X.Y.Z as the URI. When querying with Hawk, you will have to specify http://your/profile/X.Y.Z in your default namespaces, in order to resolve the ambiguity that may exist between multiple versions of the same metamodel. If a new version of the UML profile is created, you will need to register the .profile.uml file again with Hawk before it can index models that use that version of the profile. Hawk treats entities of different versions of the same profile as entirely different types. In terms of implementation details, Hawk takes advantage of the fact that .profile.uml files contain a collection of Ecore EPackages . Hawk simply adds the /X.Y.Z version suffix to their namespace URI, and otherwise leaves them untouched.","title":"Multi-version Papyrus UML profile support"},{"location":"basic-use/papyrus/#example-using-hawk-to-index-all-uml-models-in-an-eclipse-workspace","text":"We will show how Hawk can be used to index all the UML models in an Eclipse workspace, including those that have custom profiles applied to them. To illustrate our approach, we will use these toy models created with Papyrus. 
We assume that you have installed Hawk into your Eclipse instance, following the steps in [[this wiki page|Installation]].","title":"Example: using Hawk to index all UML models in an Eclipse workspace"},{"location":"basic-use/papyrus/#models","text":"The model is a very simple UML class diagram: It only has two classes, one of which has the <<Special>> stereotype with a priority property equal to 23. This value is not shown in the diagram, but it can be checked from the \"Profile\" page of the \"Properties\" view when the class is selected. The profile including the <<Special>> stereotype is also very simple: The diagram imports the Class UML metaclass, and then extends it with the <<Special>> stereotype.","title":"Models"},{"location":"basic-use/papyrus/#creating-the-hawk-index","text":"Before we can run any queries, we need to create a Hawk index. If we have installed Hawk correctly, we will be able to open the \"Hawk\" view and see something like this: Right now, we have no indexes in Hawk. We need to press the \"Add\" button, which is highlighted in red above. We should see a dialog similar to this: Important points: We can pick any name we want, as long as it is unique. Instance type should be a LocalHawkFactory if we intend to index our workspace. The Local storage folder will contain some of the configuration of that Hawk instance, and the database. Remote location is irrelevant when using the LocalHawkFactory . If we are only interested in indexing the UML models in the workspace, it is a good idea to Disable all the plugins and then check only the UML metamodel and model resource factories. You can choose to use Neo4j (if you [[build it on your own|Running from source]]), OrientDB, or any other backend we may support in the future. Min/Max Delay indicate how often Hawk will poll all the indexed locations. 
If you are only indexing the current workspace, you can leave both at 0 to disable polling: regardless of this setting, Hawk will react automatically whenever something in the workspace changes. Once the index has been created, you should see an entry for it in the \"Hawk\" view:","title":"Creating the Hawk index"},{"location":"basic-use/papyrus/#adding-metamodels-and-models","text":"From the screenshot above, we know that the index is RUNNING (available for queries) and not UPDATING nor STOPPED , so we can start configuring it as we need. First, we should double click on it to open the configuration dialog: We should go to the \"Metamodels\" tab and click on \"Add...\", then select the specialThings.profile/model.profile.uml file. Hawk will register our custom profile as a metamodel, and we will be ready to index models using all the versions of this profile so far. Should we define any newer versions, we will have to add the file again to Hawk. The dialog will now list the new metamodel: Now we are ready to add the locations where the models to be indexed are stored. We go to the \"Indexed Locations\" tab and click on \"Add\". First, we will add the predefined UML libraries with some commonly used instances (e.g. UML data types): We need to pick the right \"Type\", and then enter / in the \"Location\" field. The location is ignored for this repository, but due to current limitations in the UI we have to enter something in the field. Next, we have to tell Hawk to index all the models in the workspace. We will \"Add\" another location, and this time fill the dialog like this: Again, the / \"Location\" is irrelevant but required by the UI. Hawk will spend some time UPDATING , and once it is RUNNING again we will be ready to run some queries on it.","title":"Adding metamodels and models"},{"location":"basic-use/papyrus/#querying-hawk","text":"We can finally query Hawk now. 
To do so, we need to select our index on the \"Hawk\" view and click on the \"Query\" button, which looks like a magnifying glass: We will see a dialog like this one, with all fields empty: Enter the query return Class.all.name; and click on the \"Run Query\" button. This query lists the names of all the classes indexed so far by Hawk. You will notice that we obtain these results: [E, T, MyClass, Special, V, NotSoSpecial, Stereotype1, K, E] The E/T/V/K/E classes came from the predefined UML libraries. If you want only the results from your workspace, you must tell Hawk through the \"Context Repositories\" field, by entering platform:/resource . This is the base URI used by Hawk to identify all the files in your workspace. Click on \"Run Query\" again, and you should obtain the results shown in the screenshot: [MyClass, Stereotype1, Special, NotSoSpecial] Note how the query also returns the classes in the profile. Should you want to avoid this, you can use the \"Context Files\" field ( *model.uml will do this) to further restrict the scope of the query.","title":"Querying Hawk"},{"location":"basic-use/papyrus/#finding-uml-objects-by-stereotype","text":"If you would like to find all applications of stereotype X , you can simply use X.all and then use base_Metaclass to find the object that was annotated with that stereotype. 
For instance, this query will find the names of all the classes that had the <<Special>> stereotype applied to them: return Special.all.base_Class.name; You will get: [MyClass] You can also access stereotype properties: return Special.all.collect(s| Sequence { s.priority, s.base_Class.name } ).asSequence; This will produce: [[23, MyClass]]","title":"Finding UML objects by stereotype"},{"location":"basic-use/papyrus/#finding-stereotype-applications-from-the-uml-object","text":"If you want to go the other way around, you can use reverse reference navigation on those base_X references to find the stereotypes that have been applied to a UML object: return Class.all .selectOne(s|s.name = 'MyClass') .revRefNav_base_Class .collect(st|Model.getTypeOf(st)) .name; This would produce: [Special]","title":"Finding stereotype applications from the UML object"},{"location":"developers/plain-maven/","text":"Hawk can be reused as a library in a regular Java application, outside OSGi. Non-OSGi developers normally use Maven or a Maven-compatible build system (e.g. Ivy or SBT), rather than Tycho. To make it easier for these developers, Hawk provides a parallel hierarchy of pom-plain.xml files that can be used to build Hawk with plain Maven ( pom.xml files are reserved for Tycho). Not all Hawk modules are available through this build, as they may rely on OSGi (e.g. org.eclipse.hawk.modelio ) or require downloading many external dependencies (e.g. org.eclipse.hawk.bpmn ). .feature , .dependencies and .tests projects are not included either, as they are OSGi-specific. For that reason, this build should only be used for distribution, and not for regular development. 
To build with regular Maven, run mvn -f pom-plain.xml install from the root of the repository to compile the artifacts and install them into the local Maven repository, so they can be used in other Maven builds.","title":"Build with plain Maven"},{"location":"developers/run-from-source/","text":"These instructions are from a clean download of an Eclipse Luna Modelling distribution and include all optional dependencies. Clone this Git repository on your Eclipse instance (e.g. using git clone or EGit) and import all projects into the workspace (File > Import > Existing Projects into Workspace). Open the org.hawk.targetplatform/org.hawk.targetplatform.target file, wait for the target definition to be resolved and click on Set as Target Platform . Install IvyDE into your Eclipse instance, right click on org.hawk.neo4j-v2.dependencies and use \"Ivy > Retrieve 'dependencies'\". The libraries should appear within Referenced Libraries . Do the same for these other projects: org.hawk.orientdb org.hawk.localfolder org.hawk.greycat Force a full rebuild with Project > Clean... > Clean all projects if you still have errors. After all these steps, you should have a working version of Hawk with all optional dependencies and no errors. You can now use \"Run as... > Eclipse Application\" to open a nested Eclipse with the Hawk GUI running inside it.","title":"Run GUI from source"},{"location":"developers/server-from-source/","text":"In order to run the server products from the sources, you need to first install the basic steps for running Hawk from source. Once you have done that, to run the server product, you should open the relevant .product file. The editor will look like this one: You should use one of the buttons highlighted in red (the triangle \"Run\" button or the bug-like \"Debug\" button) to run the product for the first time. It may fail, due to the slightly buggy way in which Eclipse produces the launch configuration from the product. 
If you see this: !ENTRY org.eclipse.osgi 4 0 2017-04-15 13:51:14.444 !MESSAGE Application error !STACK 1 java.lang.RuntimeException: No application id has been found. That means you need to tweak the launch configuration a bit. Shut down the server by entering shutdown and then close in the \"Console\" view, and then open the \"Run\" menu and select \"Run Configurations...\". Select the relevant \"Eclipse Application\" launch configuration and go to the \"Plug-ins\" section: Click on \"Add Required Plugins\": you'll notice that it adds quite a few things. Click on \"Run\" now: it should work fine. Eventually, you should see this text: Welcome to the Hawk Server! List available commands with 'hserverHelp'. Stop the server with 'shutdown' and then 'close'. You are done! You can also use \"Debug\" to track bugs in the server itself. Note : if you would like to make changes to the Thrift API, you will need to edit the api.emf Emfatic file in the service.api project, and then regenerate the api.thrift file by using Ecore2Thrift . After that, you will need to run the Thrift code generator through the generate.sh script in the root of the same project.","title":"Run Server from source"},{"location":"developers/website/","text":"The website for Eclipse Hawk is written in MkDocs . The website repository is available here: https://git.eclipse.org/c/www.eclipse.org/hawk.git/ To work on the website, clone it with your Eclipse credentials, and follow the instructions in the included README.md file.","title":"Work on the website"},{"location":"server/api-security/","text":"In some cases, we may want to protect the API from unaccounted use, as clients would have access to potentially sensitive information. In order to provide this access control, the Apache Shiro library has been integrated transparently as a filter for all incoming requests to the endpoints under /thrift . 
/thrift-local endpoints are not password-protected, as they only answer requests from other processes on the machine hosting the MONDO Server. Apache Shiro protects these /thrift endpoints using standard HTTP Basic authentication, which is transparent to Thrift, avoiding the need to pollute the web API with access tokens in every single method. Industrial partners will be instructed to always use the authentication layer in combination with SSL, since HTTP Basic by itself is insecure. One important advantage of Shiro is its configurability through a single .ini file, like this one: [main] # Objects and their properties are defined here, # Such as the securityManager, Realms and anything # else needed to build the SecurityManager # Note: this should be set to true in production! ssl.enabled = true # Toggle to enable/disable authentication completely authcBasic.enabled = true # Use Hawk realm mondoRealm = uk.ac.york.mondo.integration.server.users.servlet.shiro.UsersRealm securityManager.realms = $mondoRealm # We\u2019re using SHA-512 for passwords, with 10k iterations credentialsMatcher = org.apache.shiro.authc.credential.Sha512CredentialsMatcher credentialsMatcher.hashIterations = 10000 mondoRealm.credentialsMatcher = $credentialsMatcher [urls] /thrift/** = ssl, authcBasic Shiro is heavily componentized, making it easy to provide alternative implementations of certain pieces and reuse the default implementations for the rest. In the shown example, all requests to the /thrift endpoints go through the default ssl and authcBasic filters: when enabled, these filters enforce the use of SSL and HTTP Basic authentication respectively. Both filters should be enabled in production environments. For the HTTP Basic authentication, the server provides its own implementation of a Shiro security realm, which is dedicated to storing and retrieving user details. 
The security realm uses an embedded MapDB database to persist these user details, which are managed through the Users service (Section 5.2.4). An embedded database was used in order to prevent end users from having to set up a database just to store a small set of users. MapDB is distributed as a single .jar file, making it very simple to integrate. In any case, the realm could be replaced with another one if desired by editing shiro.ini on an installation. Passwords for the MONDO realm are stored in a hashed and salted form, using 10000 iterations of SHA-512 and a random per-password salt. As for the client side, the command-line based clients accept optional arguments for the required credentials when connecting to the Thrift endpoints. If the password is omitted, the command-line based clients will require it in a separate \"silent\" prompt that does not show the characters that are typed, preventing shoulder surfing attacks. Due to limitations in the Eclipse graphical user interface, these silent prompts are only available when running the command-line based clients from a proper terminal window and not from the Eclipse \"Console\" view. The graphical clients connect to the Thrift endpoints using \u201clazy\u201d credential providers: if authentication is required, they will attempt to retrieve previously used credentials from the Eclipse secure store and if no such credentials exist, they will show an authentication dialog asking for the username and password to be used. The Eclipse secure storage takes advantage of the access control and encryption capabilities of the underlying operating system as much as possible, and makes it possible to store passwords safely and conveniently. These stored MONDO server credentials can be managed from the \"Hawk Servers\" preference page. Regarding the Artemis messaging queue, it has been secured with the same Shiro realm as the Thrift endpoints. 
The remote Hawk EMF abstraction (the only component that uses Artemis within the MONDO platform) will connect to Artemis with the same credentials that were used to connect to Thrift, if authentication was required.","title":"Thrift API security"},{"location":"server/api/","text":"Services \u00b6 Hawk \u00b6 The following service operations expose the capabilities of the Hawk heterogeneous model indexing framework. Hawk.createInstance \u00b6 Creates a new Hawk instance (stopped). Returns void . Takes these parameters: Name Type Documentation name string The unique name of the new Hawk instance. backend string The name of the backend to be used, as returned by listBackends(). minimumDelayMillis i32 Minimum delay between periodic synchronization in milliseconds. maximumDelayMillis i32 Maximum delay between periodic synchronization in milliseconds (0 to disable periodic synchronization). enabledPlugins (optional) list List of plugins to be enabled: if not set, all plugins are enabled. Hawk.listBackends \u00b6 Lists the names of the available storage backends. Returns list<string> . Does not take any parameters. Hawk.listPlugins \u00b6 Lists all the Hawk plugins that can be enabled or disabled: metamodel parsers, model parsers and graph change listeners. Returns list<string> . Does not take any parameters. Hawk.listInstances \u00b6 Lists the details of all Hawk instances. Returns list<HawkInstance> . Does not take any parameters. Hawk.removeInstance \u00b6 Removes an existing Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to remove. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. Hawk.startInstance \u00b6 Starts a stopped Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to start. 
May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. Hawk.stopInstance \u00b6 Stops a running Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to stop. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.syncInstance \u00b6 Forces an immediate synchronization on a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to synchronise. blockUntilDone (optional) bool If true, blocks the call until the synchronisation completes. False by default. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.registerMetamodels \u00b6 Registers a set of file-based metamodels with a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. metamodel list The metamodels to register. More than one metamodel file can be provided in one request, to accommodate fragmented metamodels. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. InvalidMetamodel The provided metamodel is not valid (e.g. unparsable or inconsistent). HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.unregisterMetamodels \u00b6 Unregisters a metamodel from a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. metamodel list The URIs of the metamodels. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. 
Hawk.listMetamodels \u00b6 Lists the URIs of the registered metamodels of a Hawk instance. Returns list<string> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.listQueryLanguages \u00b6 Lists the supported query languages and their status. Returns list<string> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. Hawk.query \u00b6 Runs a query on a Hawk instance and returns a sequence of scalar values and/or model elements. Returns QueryResult . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. query string The query to be executed. language string The name of the query language used (e.g. EOL, OCL). options HawkQueryOptions Options for the query. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. UnknownQueryLanguage The specified query language is not supported by the operation. InvalidQuery The specified query is not valid. FailedQuery The specified query failed to complete its execution. Hawk.resolveProxies \u00b6 Returns populated model elements for the provided proxies. Returns list<ModelElement> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. ids list Proxy model element IDs to be resolved. options HawkQueryOptions Options for the query. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.addRepository \u00b6 Asks a Hawk instance to start monitoring a repository. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. 
repo Repository The repository to monitor. credentials (optional) Credentials A valid set of credentials that has read-access to the repository. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. UnknownRepositoryType The specified repository type is not supported by the operation. VCSAuthenticationFailed The client failed to prove its identity in the VCS. Hawk.isFrozen \u00b6 Returns true if a repository is frozen, false otherwise. Returns bool . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to query. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.setFrozen \u00b6 Changes the 'frozen' state of a repository. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to be changed. isFrozen bool Whether the repository should be frozen (true) or not (false). May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.removeRepository \u00b6 Asks a Hawk instance to stop monitoring a repository and remove its elements from the graph. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to stop monitoring. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.updateRepositoryCredentials \u00b6 Changes the credentials used to monitor a repository. Returns void . 
Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to update. cred Credentials The new credentials to be used. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.listRepositories \u00b6 Lists the repositories monitored by a Hawk instance. Returns list<Repository> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.listRepositoryTypes \u00b6 Lists the available repository types in this installation. Returns list<string> . Does not take any parameters. Hawk.listFiles \u00b6 Lists the paths of the files of the indexed repository. Returns list<string> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. repository list The URI of the indexed repository. filePatterns list File name patterns to search for (* lists all files). May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.configurePolling \u00b6 Sets the base polling period and max interval of a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. base i32 The base polling period (in seconds). max i32 The maximum polling interval (in seconds). May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. InvalidPollingConfiguration The polling configuration is not valid. Hawk.addDerivedAttribute \u00b6 Add a new derived attribute to a Hawk instance. 
Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. spec DerivedAttributeSpec The details of the new derived attribute. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. InvalidDerivedAttributeSpec The derived attribute specification is not valid. Hawk.removeDerivedAttribute \u00b6 Remove a derived attribute from a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. spec DerivedAttributeSpec The details of the derived attribute to be removed. Only the first three fields of the spec need to be populated. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.listDerivedAttributes \u00b6 Lists the derived attributes of a Hawk instance. Only the first three fields of the spec are currently populated. Returns list<DerivedAttributeSpec> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.addIndexedAttribute \u00b6 Add a new indexed attribute to a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. spec IndexedAttributeSpec The details of the new indexed attribute. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. InvalidIndexedAttributeSpec The indexed attribute specification is not valid. Hawk.removeIndexedAttribute \u00b6 Remove an indexed attribute from a Hawk instance. Returns void . 
Takes these parameters: Name Type Documentation name string The name of the Hawk instance. spec IndexedAttributeSpec The details of the indexed attribute to be removed. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.listIndexedAttributes \u00b6 Lists the indexed attributes of a Hawk instance. Returns list<IndexedAttributeSpec> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.getModel \u00b6 Returns the contents of one or more models indexed in a Hawk instance. Cross-model references are also resolved, and contained objects are always sent. Returns list<ModelElement> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. options HawkQueryOptions Options to limit the contents to be sent. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.getRootElements \u00b6 Returns the root objects of one or more models indexed in a Hawk instance. Node IDs are always sent, and contained objects are never sent. Returns list<ModelElement> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. options HawkQueryOptions Options to limit the contents to be sent. Hawk.watchStateChanges \u00b6 Returns subscription details to a queue of HawkStateEvents with notifications about changes in the indexer's state. Returns Subscription . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. 
HawkInstanceNotRunning The selected Hawk instance is not running. Hawk.watchModelChanges \u00b6 Returns subscription details to a queue of HawkChangeEvents with notifications about changes to a set of indexed models. Returns Subscription . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. repositoryUri string The URI of the repository in which the model is contained. filePath list The pattern(s) for the model file(s) in the repository. clientID string Unique client ID (used as suffix for the queue name). durableEvents SubscriptionDurability Durability of the subscription. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. IFCExport \u00b6 IFC export facility for getting IFC models from the Hawk server. IFCExport.exportAsSTEP \u00b6 Export part of a Hawk index in IFC STEP format. Returns IFCExportJob . Takes these parameters: Name Type Documentation hawkInstance string options IFCExportOptions IFCExport.getJobs \u00b6 List all the previously scheduled IFC export jobs. Returns list<IFCExportJob> . Does not take any parameters. IFCExport.getJobStatus \u00b6 Retrieve the current status of the job with the specified ID. Returns IFCExportJob . Takes these parameters: Name Type Documentation jobID string IFCExport.killJob \u00b6 Cancel the job with the specified ID. Returns bool . Takes these parameters: Name Type Documentation jobID string Users \u00b6 The majority of service operations provided by the server require user authentication (indicated in the top-left cell of each operation table) to prevent unaccountable use. As such, the platform needs to provide basic user management service operations for creating, updating and deleting user accounts. When handling passwords, only SSL should be used, as otherwise they could be intercepted. Users.createUser \u00b6 Creates a new platform user. Returns void . 
Takes these parameters: Name Type Documentation username string A unique identifier for the user. password string The desired password. profile UserProfile The profile of the user. May throw these exceptions: Name Documentation UserExists The specified username already exists. Users.updateProfile \u00b6 Updates the profile of a platform user. Returns void . Takes these parameters: Name Type Documentation username string The name of the user to update the profile of. profile UserProfile The updated profile of the user. May throw these exceptions: Name Documentation UserNotFound The specified username does not exist. Users.updatePassword \u00b6 Updates the password of a platform user. Returns void . Takes these parameters: Name Type Documentation username string The name of the user to update the password of. newPassword string New password to be set. May throw these exceptions: Name Documentation UserNotFound The specified username does not exist. Users.deleteUser \u00b6 Deletes a platform user. Returns void . Takes these parameters: Name Type Documentation username string The name of the user to delete. May throw these exceptions: Name Documentation UserNotFound The specified username does not exist. Entities \u00b6 AttributeSlot \u00b6 Represents a slot that can store the value(s) of an attribute of a model element. Inherits from: Slot. Name Type Documentation name (inherited) string The name of the model element property the value of which is stored in this slot. value (optional) SlotValue Value of the slot (if set). Used in: ModelElement. CommitItem \u00b6 Simplified entry within a commit of a repository. Name Type Documentation path string Path within the repository, using / as separator. repoURL string URL of the repository. revision string Unique identifier of the revision of the repository. type CommitItemChangeType Type of change within the commit. 
Used in: HawkModelElementAdditionEvent, HawkModelElementRemovalEvent, HawkAttributeUpdateEvent, HawkAttributeRemovalEvent, HawkReferenceAdditionEvent, HawkReferenceRemovalEvent, HawkFileAdditionEvent, HawkFileRemovalEvent. ContainerSlot \u00b6 Represents a slot that can store other model elements within a model element. Inherits from: Slot. Name Type Documentation elements list Contained elements for this slot. name (inherited) string The name of the model element property the value of which is stored in this slot. Used in: ModelElement. Credentials \u00b6 Credentials of the client in the target VCS. Name Type Documentation password string Password for logging into the VCS. username string Username for logging into the VCS. Used in: Hawk.addRepository, Hawk.updateRepositoryCredentials. DerivedAttributeSpec \u00b6 Used to configure Hawk's derived attributes (discussed in D5.3). Name Type Documentation attributeName string The name of the derived attribute. attributeType (optional) string The (primitive) type of the derived attribute. derivationLanguage (optional) string The language used to express the derivation logic. derivationLogic (optional) string An executable expression of the derivation logic in the language above. isMany (optional) bool The multiplicity of the derived attribute. isOrdered (optional) bool A flag specifying whether the order of the values of the derived attribute is significant (only makes sense when isMany=true). isUnique (optional) bool A flag specifying whether the values of the derived attribute are unique (only makes sense when isMany=true). metamodelUri string The URI of the metamodel to which the derived attribute belongs. typeName string The name of the type to which the derived attribute belongs. Used in: Hawk.addDerivedAttribute, Hawk.removeDerivedAttribute, Hawk.listDerivedAttributes. EffectiveMetamodel \u00b6 Representation of a set of rules for either including or excluding certain types and/or slots within a metamodel. 
Name Type Documentation slots set Slots within the type that should be included or excluded: empty means 'all slots'. type string Type that should be included or excluded. Used in: EffectiveMetamodelMap. EffectiveMetamodelMap \u00b6 Representation of a set of rules for either including or excluding metamodels, types or slots. Name Type Documentation metamodel map > Types and slots within the metamodel that should be included or excluded: empty means 'all types and slots'. uri string Namespace URI of the metamodel. Used in: HawkQueryOptions, IFCExportOptions. File \u00b6 A file to be sent through the network. Name Type Documentation contents binary Sequence of bytes with the contents of the file. name string Name of the file. Used in: Hawk.registerMetamodels. HawkAttributeRemovalEvent \u00b6 Serialized form of an attribute removal event. Name Type Documentation attribute string Name of the attribute that was removed. id string Identifier of the model element that was changed. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent. HawkAttributeUpdateEvent \u00b6 Serialized form of an attribute update event. Name Type Documentation attribute string Name of the attribute that was changed. id string Identifier of the model element that was changed. value SlotValue New value for the attribute. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent. HawkChangeEvent \u00b6 Serialized form of a change in the indexed models of a Hawk instance. Name Type Documentation fileAddition HawkFileAdditionEvent A file was added. fileRemoval HawkFileRemovalEvent A file was removed. modelElementAddition HawkModelElementAdditionEvent A model element was added. modelElementAttributeRemoval HawkAttributeRemovalEvent An attribute was removed. modelElementAttributeUpdate HawkAttributeUpdateEvent An attribute was updated. modelElementRemoval HawkModelElementRemovalEvent A model element was removed. 
referenceAddition HawkReferenceAdditionEvent A reference was added. referenceRemoval HawkReferenceRemovalEvent A reference was removed. syncEnd HawkSynchronizationEndEvent Synchronization ended. syncStart HawkSynchronizationStartEvent Synchronization started. HawkFileAdditionEvent \u00b6 Serialized form of a file addition event. Name Type Documentation vcsItem CommitItem Reference to file that was added, including VCS metadata. Used in: HawkChangeEvent. HawkFileRemovalEvent \u00b6 A file was removed. Name Type Documentation vcsItem CommitItem Reference to file that was removed, including VCS metadata. Used in: HawkChangeEvent. HawkInstance \u00b6 Status of a Hawk instance. Name Type Documentation message string Last info message from the instance. name string The name of the instance. state HawkState Current state of the instance. Used in: Hawk.listInstances. HawkModelElementAdditionEvent \u00b6 Serialized form of a model element addition event. Name Type Documentation id string Identifier of the model element that was added. metamodelURI string Metamodel URI of the type of the model element. typeName string Name of the type of the model element. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent. HawkModelElementRemovalEvent \u00b6 Serialized form of a model element removal event. Name Type Documentation id string Identifier of the model element that was removed. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent. HawkQueryOptions \u00b6 Options for a Hawk query. Name Type Documentation defaultNamespaces (optional) string The default namespaces to be used to resolve ambiguous unqualified types. effectiveMetamodelExcludes (optional) map >> If set and not empty, the mentioned metamodels, types and features will not be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. 
effectiveMetamodelIncludes (optional) map >> If set and not empty, only the specified metamodels, types and features will be fetched. Otherwise, everything that is not excluded will be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. filePatterns (optional) list The file patterns for the query (e.g. *.uml). includeAttributes (optional) bool Whether to include attributes (true) or not (false) in model element results. includeContained (optional) bool Whether to include all the child elements of the model element results (true) or not (false). includeDerived (optional) bool Whether to include derived attributes (true) or not (false) in model element results. includeNodeIDs (optional) bool Whether to include node IDs (true) or not (false) in model element results. includeReferences (optional) bool Whether to include references (true) or not (false) in model element results. repositoryPattern (optional) string The repository for the query (or * for all repositories). Used in: Hawk.query, Hawk.resolveProxies, Hawk.getModel, Hawk.getRootElements. HawkReferenceAdditionEvent \u00b6 Serialized form of a reference addition event. Name Type Documentation refName string Name of the reference that was added. sourceId string Identifier of the source model element. targetId string Identifier of the target model element. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent. HawkReferenceRemovalEvent \u00b6 Serialized form of a reference removal event. Name Type Documentation refName string Name of the reference that was removed. sourceId string Identifier of the source model element. targetId string Identifier of the target model element. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent. HawkStateEvent \u00b6 Serialized form of a change in the state of a Hawk instance. 
Name Type Documentation message string Short message about the current status of the server. state HawkState State of the Hawk instance. timestamp i64 Timestamp for this state change. HawkSynchronizationEndEvent \u00b6 Serialized form of a sync end event. Name Type Documentation timestampNanos i64 Local timestamp, measured in nanoseconds. Only meant to be used to compute synchronization cost. Used in: HawkChangeEvent. HawkSynchronizationStartEvent \u00b6 Serialized form of a sync start event. Name Type Documentation timestampNanos i64 Local timestamp, measured in nanoseconds. Only meant to be used to compute synchronization cost. Used in: HawkChangeEvent. IFCExportJob \u00b6 Information about a server-side IFC export job. Name Type Documentation jobID string message string status IFCExportStatus Used in: IFCExport.exportAsSTEP, IFCExport.getJobs, IFCExport.getJobStatus. IFCExportOptions \u00b6 Options for a server-side IFC export. Name Type Documentation excludeRules (optional) map >> If set and not empty, the mentioned metamodels, types and features will not be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. filePatterns (optional) list The file patterns for the query (e.g. *.uml). includeRules (optional) map >> If set and not empty, only the specified metamodels, types and features will be fetched. Otherwise, everything that is not excluded will be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. repositoryPattern (optional) string The repository for the query (or * for all repositories). Used in: IFCExport.exportAsSTEP. IndexedAttributeSpec \u00b6 Used to configure Hawk's indexed attributes (discussed in D5.3). Name Type Documentation attributeName string The name of the indexed attribute. metamodelUri string The URI of the metamodel to which the indexed attribute belongs. typeName string The name of the type to which the indexed attribute belongs. 
Used in: Hawk.addIndexedAttribute, Hawk.removeIndexedAttribute, Hawk.listIndexedAttributes. InvalidModelSpec \u00b6 The model specification is not valid: the model or the metamodels are inaccessible or invalid. Name Type Documentation reason string Reason for the spec not being valid. spec ModelSpec A copy of the invalid model specification. InvalidTransformation \u00b6 The transformation is not valid: it is unparsable or inconsistent. Name Type Documentation location string Location of the problem, if applicable. Usually a combination of line and column numbers. reason string Reason for the transformation not being valid. MixedReference \u00b6 Represents a reference to a model element: it can be an identifier or a position. Only used when the same ReferenceSlot has both identifier-based and position-based references. This may be the case if we are retrieving a subset of the model which has references between its elements and with elements outside the subset at the same time. Name Type Documentation id string Identifier-based reference to a model element. position i32 Position-based reference to a model element. Used in: ReferenceSlot. ModelElement \u00b6 Represents a model element. Name Type Documentation attributes (optional) list Slots holding the values of the model element's attributes, if any have been set. containers (optional) list Slots holding contained model elements, if any have been set. file (optional) string Name of the file to which the element belongs (not set if equal to that of the previous model element). id (optional) string Unique ID of the model element (not set if using position-based references). metamodelUri (optional) string URI of the metamodel to which the type of the element belongs (not set if equal to that of the previous model element). references (optional) list Slots holding the values of the model element's references, if any have been set. 
repositoryURL (optional) string URI of the repository to which the element belongs (not set if equal to that of the previous model element). typeName (optional) string Name of the type that the model element is an instance of (not set if equal to that of the previous model element). Used in: Hawk.resolveProxies, Hawk.getModel, Hawk.getRootElements, ContainerSlot, QueryResult. ModelElementType \u00b6 Represents a type of model element. Name Type Documentation attributes (optional) list Metadata for the attribute slots. id string Unique ID of the model element type. metamodelUri string URI of the metamodel to which the type belongs. references (optional) list Metadata for the reference slots. typeName string Name of the type. Used in: QueryResult. ModelSpec \u00b6 Captures information about source/target models of ATL transformations. Name Type Documentation metamodelUris list The URIs of the metamodels to which elements of the model conform. uri string The URI from which the model will be loaded or to which it will be persisted. Used in: InvalidModelSpec. QueryResult \u00b6 Union type for a scalar value, a reference to a model element, a heterogeneous list or a string/value map. Query results may return all types of results, so we need to stay flexible. Inherits from: Value. Name Type Documentation vBoolean (inherited) bool Boolean (true/false) value. vByte (inherited) byte 8-bit signed integer value. vDouble (inherited) double 64-bit floating point value. vInteger (inherited) i32 32-bit signed integer value. vList list Nested list of query results. vLong (inherited) i64 64-bit signed integer value. vMap map Map between query results. vModelElement ModelElement Encoded model element. vModelElementType ModelElementType Encoded model element type. vShort (inherited) i16 16-bit signed integer value. vString (inherited) string Sequence of UTF8 characters. Used in: Hawk.query, QueryResult, QueryResultMap. 
QueryResultMap \u00b6 Name Type Documentation name string value QueryResult Used in: QueryResult. ReferenceSlot \u00b6 Represents a slot that can store the value(s) of a reference of a model element. References can be expressed as positions within a result tree (using pre-order traversal) or IDs. id, ids, position, positions and mixed are all mutually exclusive. At least one position or one ID must be given. Inherits from: Slot. Name Type Documentation id (optional) string Unique identifier of the referenced element (if there is only one ID-based reference in this slot). ids (optional) list Unique identifiers of the referenced elements (if more than one). mixed (optional) list Mix of identifier- and position-based references (if there is at least one position and one ID). name (inherited) string The name of the model element property the value of which is stored in this slot. position (optional) i32 Position of the referenced element (if there is only one position-based reference in this slot). positions (optional) list Positions of the referenced elements (if more than one). Used in: ModelElement. Repository \u00b6 Entity that represents a model repository. Name Type Documentation isFrozen (optional) bool True if the repository is frozen, false otherwise. type string The type of repository. uri string The URI to the repository. Used in: Hawk.addRepository, Hawk.listRepositories. Slot \u00b6 Represents a slot that can store the value(s) of a property of a model element. Inherited by: AttributeSlot, ReferenceSlot, ContainerSlot. Name Type Documentation name string The name of the model element property the value of which is stored in this slot. SlotMetadata \u00b6 Represents the metadata of a slot in a model element type. Name Type Documentation isMany bool True if this slot holds a collection of values instead of a single value. isOrdered bool True if the values in this slot are ordered. 
isUnique bool True if the value of this slot must be unique within its containing model. name string The name of the model element property that is stored in this slot. type string The type of the values in this slot. Used in: ModelElementType. SlotValue \u00b6 Union type for a single scalar value or a homogeneous collection of scalar values. Inherits from: Value. Name Type Documentation vBoolean (inherited) bool Boolean (true/false) value. vBooleans list List of true/false values. vByte (inherited) byte 8-bit signed integer value. vBytes binary List of 8-bit signed integers. vDouble (inherited) double 64-bit floating point value. vDoubles list List of 64-bit floating point values. vInteger (inherited) i32 32-bit signed integer value. vIntegers list List of 32-bit signed integers. vLong (inherited) i64 64-bit signed integer value. vLongs list List of 64-bit signed integers. vShort (inherited) i16 16-bit signed integer value. vShorts list List of 16-bit signed integers. vString (inherited) string Sequence of UTF8 characters. vStrings list List of sequences of UTF8 characters. Used in: HawkAttributeUpdateEvent, AttributeSlot. Subscription \u00b6 Details about a subscription to a topic queue. Name Type Documentation host string Host name of the message queue server. port i32 Port in which the message queue server is listening. queueAddress string Address of the topic queue. queueName string Name of the topic queue. sslRequired bool Whether SSL is required or not. Used in: Hawk.watchStateChanges, Hawk.watchModelChanges. UserProfile \u00b6 Minimal details about registered users. Name Type Documentation admin bool Whether the user has admin rights (i.e. so that they can create new users, change the status of admin users etc). realName string The real name of the user. Used in: Users.createUser, Users.updateProfile. Value \u00b6 Union type for a single scalar value. Inherited by: QueryResult, SlotValue. Name Type Documentation vBoolean bool Boolean (true/false) value. 
vByte byte 8-bit signed integer value. vDouble double 64-bit floating point value. vInteger i32 32-bit signed integer value. vLong i64 64-bit signed integer value. vShort i16 16-bit signed integer value. vString string Sequence of UTF8 characters. Enumerations \u00b6 CommitItemChangeType \u00b6 Type of change within a commit. Name Documentation ADDED File was added. DELETED File was removed. REPLACED File was replaced. UNKNOWN Unknown type of change. UPDATED File was updated. HawkState \u00b6 One of the states that a Hawk instance can be in. Name Documentation RUNNING The instance is running and monitoring the indexed locations. STOPPED The instance is stopped and is not monitoring any indexed locations. UPDATING The instance is updating its contents from the indexed locations. IFCExportStatus \u00b6 Status of a server-side IFC export job. Name Documentation CANCELLED The job has been cancelled. DONE The job is completed. FAILED The job has failed. RUNNING The job is currently running. SCHEDULED The job has been scheduled but has not started yet. SubscriptionDurability \u00b6 Durability of a subscription. Name Documentation DEFAULT Subscription survives client disconnections but not server restarts. DURABLE Subscription survives client disconnections and server restarts. TEMPORARY Subscription removed after disconnecting. Exceptions \u00b6 FailedQuery \u00b6 The specified query failed to complete its execution. Name Type Documentation reason string Reason for the query failing to complete its execution. Used in: Hawk.query. HawkInstanceNotFound \u00b6 No Hawk instance exists with that name. No fields for this entity. 
Used in: Hawk.removeInstance, Hawk.startInstance, Hawk.stopInstance, Hawk.syncInstance, Hawk.registerMetamodels, Hawk.unregisterMetamodels, Hawk.listMetamodels, Hawk.query, Hawk.resolveProxies, Hawk.addRepository, Hawk.isFrozen, Hawk.setFrozen, Hawk.removeRepository, Hawk.updateRepositoryCredentials, Hawk.listRepositories, Hawk.listFiles, Hawk.configurePolling, Hawk.addDerivedAttribute, Hawk.removeDerivedAttribute, Hawk.listDerivedAttributes, Hawk.addIndexedAttribute, Hawk.removeIndexedAttribute, Hawk.listIndexedAttributes, Hawk.getModel, Hawk.watchStateChanges, Hawk.watchModelChanges. HawkInstanceNotRunning \u00b6 The selected Hawk instance is not running. No fields for this entity. Used in: Hawk.stopInstance, Hawk.syncInstance, Hawk.registerMetamodels, Hawk.unregisterMetamodels, Hawk.listMetamodels, Hawk.query, Hawk.resolveProxies, Hawk.addRepository, Hawk.isFrozen, Hawk.setFrozen, Hawk.removeRepository, Hawk.updateRepositoryCredentials, Hawk.listRepositories, Hawk.listFiles, Hawk.configurePolling, Hawk.addDerivedAttribute, Hawk.removeDerivedAttribute, Hawk.listDerivedAttributes, Hawk.addIndexedAttribute, Hawk.removeIndexedAttribute, Hawk.listIndexedAttributes, Hawk.getModel, Hawk.watchStateChanges, Hawk.watchModelChanges. InvalidDerivedAttributeSpec \u00b6 The derived attribute specification is not valid. Name Type Documentation reason string Reason for the spec not being valid. Used in: Hawk.addDerivedAttribute. InvalidIndexedAttributeSpec \u00b6 The indexed attribute specification is not valid. Name Type Documentation reason string Reason for the spec not being valid. Used in: Hawk.addIndexedAttribute. InvalidMetamodel \u00b6 The provided metamodel is not valid (e.g. unparsable or inconsistent). Name Type Documentation reason string Reason for the metamodel not being valid. Used in: Hawk.registerMetamodels. InvalidPollingConfiguration \u00b6 The polling configuration is not valid. Name Type Documentation reason string Reason for the spec not being valid. 
Used in: Hawk.configurePolling. InvalidQuery \u00b6 The specified query is not valid. Name Type Documentation reason string Reason for the query not being valid. Used in: Hawk.query. UnknownQueryLanguage \u00b6 The specified query language is not supported by the operation. No fields for this entity. Used in: Hawk.query. UnknownRepositoryType \u00b6 The specified repository type is not supported by the operation. No fields for this entity. Used in: Hawk.addRepository. UserExists \u00b6 The specified username already exists. No fields for this entity. Used in: Users.createUser. UserNotFound \u00b6 The specified username does not exist. No fields for this entity. Used in: Users.updateProfile, Users.updatePassword, Users.deleteUser. VCSAuthenticationFailed \u00b6 The client failed to prove its identity in the VCS. No fields for this entity. Used in: Hawk.addRepository. This file was automatically generated by Ecore2Thrift. https://github.com/bluezio/ecore2thrift","title":"Thrift API"},{"location":"server/api/#services","text":"","title":"Services"},{"location":"server/api/#hawk","text":"The following service operations expose the capabilities of the Hawk heterogeneous model indexing framework.","title":"Hawk"},{"location":"server/api/#hawkcreateinstance","text":"Creates a new Hawk instance (stopped). Returns void . Takes these parameters: Name Type Documentation name string The unique name of the new Hawk instance. backend string The name of the backend to be used, as returned by listBackends(). minimumDelayMillis i32 Minimum delay between periodic synchronization in milliseconds. maximumDelayMillis i32 Maximum delay between periodic synchronization in milliseconds (0 to disable periodic synchronization). enabledPlugins (optional) list List of plugins to be enabled: if not set, all plugins are enabled.","title":"Hawk.createInstance"},{"location":"server/api/#hawklistbackends","text":"Lists the names of the available storage backends. Returns list<string> . 
Does not take any parameters.","title":"Hawk.listBackends"},{"location":"server/api/#hawklistplugins","text":"Lists all the Hawk plugins that can be enabled or disabled: metamodel parsers, model parsers and graph change listeners. Returns list<string> . Does not take any parameters.","title":"Hawk.listPlugins"},{"location":"server/api/#hawklistinstances","text":"Lists the details of all Hawk instances. Returns list<HawkInstance> . Does not take any parameters.","title":"Hawk.listInstances"},{"location":"server/api/#hawkremoveinstance","text":"Removes an existing Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to remove. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name.","title":"Hawk.removeInstance"},{"location":"server/api/#hawkstartinstance","text":"Starts a stopped Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to start. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name.","title":"Hawk.startInstance"},{"location":"server/api/#hawkstopinstance","text":"Stops a running Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to stop. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.stopInstance"},{"location":"server/api/#hawksyncinstance","text":"Forces an immediate synchronization on a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance to synchronize. blockUntilDone (optional) bool If true, blocks the call until the synchronisation completes. False by default. 
May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.syncInstance"},{"location":"server/api/#hawkregistermetamodels","text":"Registers a set of file-based metamodels with a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. metamodel list The metamodels to register. More than one metamodel file can be provided in one request, to accommodate fragmented metamodels. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. InvalidMetamodel The provided metamodel is not valid (e.g. unparsable or inconsistent). HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.registerMetamodels"},{"location":"server/api/#hawkunregistermetamodels","text":"Unregisters a metamodel from a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. metamodel list The URIs of the metamodels. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.unregisterMetamodels"},{"location":"server/api/#hawklistmetamodels","text":"Lists the URIs of the registered metamodels of a Hawk instance. Returns list<string> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.listMetamodels"},{"location":"server/api/#hawklistquerylanguages","text":"Lists the supported query languages and their status. Returns list<string> . 
Takes these parameters: Name Type Documentation name string The name of the Hawk instance.","title":"Hawk.listQueryLanguages"},{"location":"server/api/#hawkquery","text":"Runs a query on a Hawk instance and returns a sequence of scalar values and/or model elements. Returns QueryResult . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. query string The query to be executed. language string The name of the query language used (e.g. EOL, OCL). options HawkQueryOptions Options for the query. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. UnknownQueryLanguage The specified query language is not supported by the operation. InvalidQuery The specified query is not valid. FailedQuery The specified query failed to complete its execution.","title":"Hawk.query"},{"location":"server/api/#hawkresolveproxies","text":"Returns populated model elements for the provided proxies. Returns list<ModelElement> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. ids list Proxy model element IDs to be resolved. options HawkQueryOptions Options for the query. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.resolveProxies"},{"location":"server/api/#hawkaddrepository","text":"Asks a Hawk instance to start monitoring a repository. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. repo Repository The repository to monitor. credentials (optional) Credentials A valid set of credentials that has read-access to the repository. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. 
UnknownRepositoryType The specified repository type is not supported by the operation. VCSAuthenticationFailed The client failed to prove its identity in the VCS.","title":"Hawk.addRepository"},{"location":"server/api/#hawkisfrozen","text":"Returns true if a repository is frozen, false otherwise. Returns bool . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to query. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.isFrozen"},{"location":"server/api/#hawksetfrozen","text":"Changes the 'frozen' state of a repository. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to be changed. isFrozen bool Whether the repository should be frozen (true) or not (false). May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.setFrozen"},{"location":"server/api/#hawkremoverepository","text":"Asks a Hawk instance to stop monitoring a repository and remove its elements from the graph. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to stop monitoring. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.removeRepository"},{"location":"server/api/#hawkupdaterepositorycredentials","text":"Changes the credentials used to monitor a repository. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. uri string The URI of the repository to update. 
cred Credentials The new credentials to be used. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.updateRepositoryCredentials"},{"location":"server/api/#hawklistrepositories","text":"Lists the repositories monitored by a Hawk instance. Returns list<Repository> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.listRepositories"},{"location":"server/api/#hawklistrepositorytypes","text":"Lists the available repository types in this installation. Returns list<string> . Does not take any parameters.","title":"Hawk.listRepositoryTypes"},{"location":"server/api/#hawklistfiles","text":"Lists the paths of the files of the indexed repository. Returns list<string> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. repository list The URI of the indexed repository. filePatterns list File name patterns to search for (* lists all files). May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.listFiles"},{"location":"server/api/#hawkconfigurepolling","text":"Sets the base polling period and max interval of a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. base i32 The base polling period (in seconds). max i32 The maximum polling interval (in seconds). May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. 
InvalidPollingConfiguration The polling configuration is not valid.","title":"Hawk.configurePolling"},{"location":"server/api/#hawkaddderivedattribute","text":"Add a new derived attribute to a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. spec DerivedAttributeSpec The details of the new derived attribute. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. InvalidDerivedAttributeSpec The derived attribute specification is not valid.","title":"Hawk.addDerivedAttribute"},{"location":"server/api/#hawkremovederivedattribute","text":"Remove a derived attribute from a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. spec DerivedAttributeSpec The details of the derived attribute to be removed. Only the first three fields of the spec need to be populated. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.removeDerivedAttribute"},{"location":"server/api/#hawklistderivedattributes","text":"Lists the derived attributes of a Hawk instance. Only the first three fields of the spec are currently populated. Returns list<DerivedAttributeSpec> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.listDerivedAttributes"},{"location":"server/api/#hawkaddindexedattribute","text":"Add a new indexed attribute to a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. 
spec IndexedAttributeSpec The details of the new indexed attribute. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running. InvalidIndexedAttributeSpec The indexed attribute specification is not valid.","title":"Hawk.addIndexedAttribute"},{"location":"server/api/#hawkremoveindexedattribute","text":"Remove an indexed attribute from a Hawk instance. Returns void . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. spec IndexedAttributeSpec The details of the indexed attribute to be removed. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.removeIndexedAttribute"},{"location":"server/api/#hawklistindexedattributes","text":"Lists the indexed attributes of a Hawk instance. Returns list<IndexedAttributeSpec> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.listIndexedAttributes"},{"location":"server/api/#hawkgetmodel","text":"Returns the contents of one or more models indexed in a Hawk instance. Cross-model references are also resolved, and contained objects are always sent. Returns list<ModelElement> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. options HawkQueryOptions Options to limit the contents to be sent. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. 
HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.getModel"},{"location":"server/api/#hawkgetrootelements","text":"Returns the root objects of one or more models indexed in a Hawk instance. Node IDs are always sent, and contained objects are never sent. Returns list<ModelElement> . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. options HawkQueryOptions Options to limit the contents to be sent.","title":"Hawk.getRootElements"},{"location":"server/api/#hawkwatchstatechanges","text":"Returns subscription details to a queue of HawkStateEvents with notifications about changes in the indexer's state. Returns Subscription . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.watchStateChanges"},{"location":"server/api/#hawkwatchmodelchanges","text":"Returns subscription details to a queue of HawkChangeEvents with notifications about changes to a set of indexed models. Returns Subscription . Takes these parameters: Name Type Documentation name string The name of the Hawk instance. repositoryUri string The URI of the repository in which the model is contained. filePath list The pattern(s) for the model file(s) in the repository. clientID string Unique client ID (used as suffix for the queue name). durableEvents SubscriptionDurability Durability of the subscription. May throw these exceptions: Name Documentation HawkInstanceNotFound No Hawk instance exists with that name. 
HawkInstanceNotRunning The selected Hawk instance is not running.","title":"Hawk.watchModelChanges"},{"location":"server/api/#ifcexport","text":"IFC export facility for getting IFC models from the Hawk server.","title":"IFCExport"},{"location":"server/api/#ifcexportexportasstep","text":"Export part of a Hawk index in IFC STEP format. Returns IFCExportJob . Takes these parameters: Name Type Documentation hawkInstance string options IFCExportOptions","title":"IFCExport.exportAsSTEP"},{"location":"server/api/#ifcexportgetjobs","text":"List all the previously scheduled IFC export jobs. Returns list<IFCExportJob> . Does not take any parameters.","title":"IFCExport.getJobs"},{"location":"server/api/#ifcexportgetjobstatus","text":"Retrieve the current status of the job with the specified ID. Returns IFCExportJob . Takes these parameters: Name Type Documentation jobID string","title":"IFCExport.getJobStatus"},{"location":"server/api/#ifcexportkilljob","text":"Cancel the job with the specified ID. Returns bool . Takes these parameters: Name Type Documentation jobID string","title":"IFCExport.killJob"},{"location":"server/api/#users","text":"The majority of service operations provided by the server require user authentication (indicated in the top-left cell of each operation table) to prevent unaccountable use. As such, the platform needs to provide basic user management service operations for creating, updating and deleting user accounts. When handling passwords, only SSL should be used, as otherwise they could be intercepted.","title":"Users"},{"location":"server/api/#userscreateuser","text":"Creates a new platform user. Returns void . Takes these parameters: Name Type Documentation username string A unique identifier for the user. password string The desired password. profile UserProfile The profile of the user. 
May throw these exceptions: Name Documentation UserExists The specified username already exists.","title":"Users.createUser"},{"location":"server/api/#usersupdateprofile","text":"Updates the profile of a platform user. Returns void . Takes these parameters: Name Type Documentation username string The name of the user to update the profile of. profile UserProfile The updated profile of the user. May throw these exceptions: Name Documentation UserNotFound The specified username does not exist.","title":"Users.updateProfile"},{"location":"server/api/#usersupdatepassword","text":"Updates the password of a platform user. Returns void . Takes these parameters: Name Type Documentation username string The name of the user to update the password of. newPassword string New password to be set. May throw these exceptions: Name Documentation UserNotFound The specified username does not exist.","title":"Users.updatePassword"},{"location":"server/api/#usersdeleteuser","text":"Deletes a platform user. Returns void . Takes these parameters: Name Type Documentation username string The name of the user to delete. May throw these exceptions: Name Documentation UserNotFound The specified username does not exist.","title":"Users.deleteUser"},{"location":"server/api/#entities","text":"","title":"Entities"},{"location":"server/api/#attributeslot","text":"Represents a slot that can store the value(s) of an attribute of a model element. Inherits from: Slot. Name Type Documentation name (inherited) string The name of the model element property the value of which is stored in this slot. value (optional) SlotValue Value of the slot (if set). Used in: ModelElement.","title":"AttributeSlot"},{"location":"server/api/#commititem","text":"Simplified entry within a commit of a repository. Name Type Documentation path string Path within the repository, using / as separator. repoURL string URL of the repository. revision string Unique identifier of the revision of the repository. 
type CommitItemChangeType Type of change within the commit. Used in: HawkModelElementAdditionEvent, HawkModelElementRemovalEvent, HawkAttributeUpdateEvent, HawkAttributeRemovalEvent, HawkReferenceAdditionEvent, HawkReferenceRemovalEvent, HawkFileAdditionEvent, HawkFileRemovalEvent.","title":"CommitItem"},{"location":"server/api/#containerslot","text":"Represents a slot that can store other model elements within a model element. Inherits from: Slot. Name Type Documentation elements list Contained elements for this slot. name (inherited) string The name of the model element property the value of which is stored in this slot. Used in: ModelElement.","title":"ContainerSlot"},{"location":"server/api/#credentials","text":"Credentials of the client in the target VCS. Name Type Documentation password string Password for logging into the VCS. username string Username for logging into the VCS. Used in: Hawk.addRepository, Hawk.updateRepositoryCredentials.","title":"Credentials"},{"location":"server/api/#derivedattributespec","text":"Used to configure Hawk's derived attributes (discussed in D5.3). Name Type Documentation attributeName string The name of the derived attribute. attributeType (optional) string The (primitive) type of the derived attribute. derivationLanguage (optional) string The language used to express the derivation logic. derivationLogic (optional) string An executable expression of the derivation logic in the language above. isMany (optional) bool The multiplicity of the derived attribute. isOrdered (optional) bool A flag specifying whether the order of the values of the derived attribute is significant (only makes sense when isMany=true). isUnique (optional) bool A flag specifying whether the values of the derived attribute are unique (only makes sense when isMany=true). metamodelUri string The URI of the metamodel to which the derived attribute belongs. typeName string The name of the type to which the derived attribute belongs. 
Used in: Hawk.addDerivedAttribute, Hawk.removeDerivedAttribute, Hawk.listDerivedAttributes.","title":"DerivedAttributeSpec"},{"location":"server/api/#effectivemetamodel","text":"Representation of a set of rules for either including or excluding certain types and/or slots within a metamodel. Name Type Documentation slots set Slots within the type that should be included or excluded: empty means 'all slots'. type string Type that should be included or excluded. Used in: EffectiveMetamodelMap.","title":"EffectiveMetamodel"},{"location":"server/api/#effectivemetamodelmap","text":"Representation of a set of rules for either including or excluding metamodels, types or slots. Name Type Documentation metamodel map > Types and slots within the metamodel that should be included or excluded: empty means 'all types and slots'. uri string Namespace URI of the metamodel. Used in: HawkQueryOptions, IFCExportOptions.","title":"EffectiveMetamodelMap"},{"location":"server/api/#file","text":"A file to be sent through the network. Name Type Documentation contents binary Sequence of bytes with the contents of the file. name string Name of the file. Used in: Hawk.registerMetamodels.","title":"File"},{"location":"server/api/#hawkattributeremovalevent","text":"Serialized form of an attribute removal event. Name Type Documentation attribute string Name of the attribute that was removed. id string Identifier of the model element that was changed. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent.","title":"HawkAttributeRemovalEvent"},{"location":"server/api/#hawkattributeupdateevent","text":"Serialized form of an attribute update event. Name Type Documentation attribute string Name of the attribute that was changed. id string Identifier of the model element that was changed. value SlotValue New value for the attribute. vcsItem CommitItem Entry within the commit that produced the changes. 
Used in: HawkChangeEvent.","title":"HawkAttributeUpdateEvent"},{"location":"server/api/#hawkchangeevent","text":"Serialized form of a change in the indexed models of a Hawk instance. Name Type Documentation fileAddition HawkFileAdditionEvent A file was added. fileRemoval HawkFileRemovalEvent A file was removed. modelElementAddition HawkModelElementAdditionEvent A model element was added. modelElementAttributeRemoval HawkAttributeRemovalEvent An attribute was removed. modelElementAttributeUpdate HawkAttributeUpdateEvent An attribute was updated. modelElementRemoval HawkModelElementRemovalEvent A model element was removed. referenceAddition HawkReferenceAdditionEvent A reference was added. referenceRemoval HawkReferenceRemovalEvent A reference was removed. syncEnd HawkSynchronizationEndEvent Synchronization ended. syncStart HawkSynchronizationStartEvent Synchronization started.","title":"HawkChangeEvent"},{"location":"server/api/#hawkfileadditionevent","text":"Serialized form of a file addition event. Name Type Documentation vcsItem CommitItem Reference to file that was added, including VCS metadata. Used in: HawkChangeEvent.","title":"HawkFileAdditionEvent"},{"location":"server/api/#hawkfileremovalevent","text":"A file was removed. Name Type Documentation vcsItem CommitItem Reference to file that was removed, including VCS metadata. Used in: HawkChangeEvent.","title":"HawkFileRemovalEvent"},{"location":"server/api/#hawkinstance","text":"Status of a Hawk instance. Name Type Documentation message string Last info message from the instance. name string The name of the instance. state HawkState Current state of the instance. Used in: Hawk.listInstances.","title":"HawkInstance"},{"location":"server/api/#hawkmodelelementadditionevent","text":"Serialized form of a model element addition event. Name Type Documentation id string Identifier of the model element that was added. metamodelURI string Metamodel URI of the type of the model element. 
typeName string Name of the type of the model element. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent.","title":"HawkModelElementAdditionEvent"},{"location":"server/api/#hawkmodelelementremovalevent","text":"Serialized form of a model element removal event. Name Type Documentation id string Identifier of the model element that was removed. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent.","title":"HawkModelElementRemovalEvent"},{"location":"server/api/#hawkqueryoptions","text":"Options for a Hawk query. Name Type Documentation defaultNamespaces (optional) string The default namespaces to be used to resolve ambiguous unqualified types. effectiveMetamodelExcludes (optional) map >> If set and not empty, the mentioned metamodels, types and features will not be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. effectiveMetamodelIncludes (optional) map >> If set and not empty, only the specified metamodels, types and features will be fetched. Otherwise, everything that is not excluded will be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. filePatterns (optional) list The file patterns for the query (e.g. *.uml). includeAttributes (optional) bool Whether to include attributes (true) or not (false) in model element results. includeContained (optional) bool Whether to include all the child elements of the model element results (true) or not (false). includeDerived (optional) bool Whether to include derived attributes (true) or not (false) in model element results. includeNodeIDs (optional) bool Whether to include node IDs (true) or not (false) in model element results. includeReferences (optional) bool Whether to include references (true) or not (false) in model element results. 
repositoryPattern (optional) string The repository for the query (or * for all repositories). Used in: Hawk.query, Hawk.resolveProxies, Hawk.getModel, Hawk.getRootElements.","title":"HawkQueryOptions"},{"location":"server/api/#hawkreferenceadditionevent","text":"Serialized form of a reference addition event. Name Type Documentation refName string Name of the reference that was added. sourceId string Identifier of the source model element. targetId string Identifier of the target model element. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent.","title":"HawkReferenceAdditionEvent"},{"location":"server/api/#hawkreferenceremovalevent","text":"Serialized form of a reference removal event. Name Type Documentation refName string Name of the reference that was removed. sourceId string Identifier of the source model element. targetId string Identifier of the target model element. vcsItem CommitItem Entry within the commit that produced the changes. Used in: HawkChangeEvent.","title":"HawkReferenceRemovalEvent"},{"location":"server/api/#hawkstateevent","text":"Serialized form of a change in the state of a Hawk instance. Name Type Documentation message string Short message about the current status of the server. state HawkState State of the Hawk instance. timestamp i64 Timestamp for this state change.","title":"HawkStateEvent"},{"location":"server/api/#hawksynchronizationendevent","text":"Serialized form of a sync end event. Name Type Documentation timestampNanos i64 Local timestamp, measured in nanoseconds. Only meant to be used to compute synchronization cost. Used in: HawkChangeEvent.","title":"HawkSynchronizationEndEvent"},{"location":"server/api/#hawksynchronizationstartevent","text":"Serialized form of a sync start event. Name Type Documentation timestampNanos i64 Local timestamp, measured in nanoseconds. Only meant to be used to compute synchronization cost. 
Used in: HawkChangeEvent.","title":"HawkSynchronizationStartEvent"},{"location":"server/api/#ifcexportjob","text":"Information about a server-side IFC export job. Name Type Documentation jobID string message string status IFCExportStatus Used in: IFCExport.exportAsSTEP, IFCExport.getJobs, IFCExport.getJobStatus.","title":"IFCExportJob"},{"location":"server/api/#ifcexportoptions","text":"Options for a server-side IFC export. Name Type Documentation excludeRules (optional) map >> If set and not empty, the mentioned metamodels, types and features will not be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. filePatterns (optional) list The file patterns for the query (e.g. *.uml). includeRules (optional) map >> If set and not empty, only the specified metamodels, types and features will be fetched. Otherwise, everything that is not excluded will be fetched. The string '*' can be used to refer to all types within a metamodel or all fields within a type. repositoryPattern (optional) string The repository for the query (or * for all repositories). Used in: IFCExport.exportAsSTEP.","title":"IFCExportOptions"},{"location":"server/api/#indexedattributespec","text":"Used to configure Hawk's indexed attributes (discussed in D5.3). Name Type Documentation attributeName string The name of the indexed attribute. metamodelUri string The URI of the metamodel to which the indexed attribute belongs. typeName string The name of the type to which the indexed attribute belongs. Used in: Hawk.addIndexedAttribute, Hawk.removeIndexedAttribute, Hawk.listIndexedAttributes.","title":"IndexedAttributeSpec"},{"location":"server/api/#invalidmodelspec","text":"The model specification is not valid: the model or the metamodels are inaccessible or invalid. Name Type Documentation reason string Reason for the spec not being valid. 
spec ModelSpec A copy of the invalid model specification.","title":"InvalidModelSpec"},{"location":"server/api/#invalidtransformation","text":"The transformation is not valid: it is unparsable or inconsistent. Name Type Documentation location string Location of the problem, if applicable. Usually a combination of line and column numbers. reason string Reason for the transformation not being valid.","title":"InvalidTransformation"},{"location":"server/api/#mixedreference","text":"Represents a reference to a model element: it can be an identifier or a position. Only used when the same ReferenceSlot has both identifier-based and position-based references. This may be the case if we are retrieving a subset of the model which has references between its elements and with elements outside the subset at the same time. Name Type Documentation id string Identifier-based reference to a model element. position i32 Position-based reference to a model element. Used in: ReferenceSlot.","title":"MixedReference"},{"location":"server/api/#modelelement","text":"Represents a model element. Name Type Documentation attributes (optional) list Slots holding the values of the model element's attributes, if any have been set. containers (optional) list Slots holding contained model elements, if any have been set. file (optional) string Name of the file to which the element belongs (not set if equal to that of the previous model element). id (optional) string Unique ID of the model element (not set if using position-based references). metamodelUri (optional) string URI of the metamodel to which the type of the element belongs (not set if equal to that of the previous model element). references (optional) list Slots holding the values of the model element's references, if any have been set. repositoryURL (optional) string URI of the repository to which the element belongs (not set if equal to that of the previous model element). 
typeName (optional) string Name of the type that the model element is an instance of (not set if equal to that of the previous model element). Used in: Hawk.resolveProxies, Hawk.getModel, Hawk.getRootElements, ContainerSlot, QueryResult.","title":"ModelElement"},{"location":"server/api/#modelelementtype","text":"Represents a type of model element. Name Type Documentation attributes (optional) list Metadata for the attribute slots. id string Unique ID of the model element type. metamodelUri string URI of the metamodel to which the type belongs. references (optional) list Metadata for the reference slots. typeName string Name of the type. Used in: QueryResult.","title":"ModelElementType"},{"location":"server/api/#modelspec","text":"Captures information about source/target models of ATL transformations. Name Type Documentation metamodelUris list The URIs of the metamodels to which elements of the model conform. uri string The URI from which the model will be loaded or to which it will be persisted. Used in: InvalidModelSpec.","title":"ModelSpec"},{"location":"server/api/#queryresult","text":"Union type for a scalar value, a reference to a model element, a heterogeneous list or a string/value map. Query results may return all types of results, so we need to stay flexible. Inherits from: Value. Name Type Documentation vBoolean (inherited) bool Boolean (true/false) value. vByte (inherited) byte 8-bit signed integer value. vDouble (inherited) double 64-bit floating point value. vInteger (inherited) i32 32-bit signed integer value. vList list Nested list of query results. vLong (inherited) i64 64-bit signed integer value. vMap map Map between query results. vModelElement ModelElement Encoded model element. vModelElementType ModelElementType Encoded model element type. vShort (inherited) i16 16-bit signed integer value. vString (inherited) string Sequence of UTF8 characters. 
Used in: Hawk.query, QueryResult, QueryResultMap.","title":"QueryResult"},{"location":"server/api/#queryresultmap","text":"Name Type Documentation name string value QueryResult Used in: QueryResult.","title":"QueryResultMap"},{"location":"server/api/#referenceslot","text":"Represents a slot that can store the value(s) of a reference of a model element. References can be expressed as positions within a result tree (using pre-order traversal) or IDs. id, ids, position, positions and mixed are all mutually exclusive. At least one position or one ID must be given. Inherits from: Slot. Name Type Documentation id (optional) string Unique identifier of the referenced element (if there is only one ID-based reference in this slot). ids (optional) list Unique identifiers of the referenced elements (if more than one). mixed (optional) list Mix of identifier- and position-based references (if there is at least one position and one ID). name (inherited) string The name of the model element property the value of which is stored in this slot. position (optional) i32 Position of the referenced element (if there is only one position-based reference in this slot). positions (optional) list Positions of the referenced elements (if more than one). Used in: ModelElement.","title":"ReferenceSlot"},{"location":"server/api/#repository","text":"Entity that represents a model repository. Name Type Documentation isFrozen (optional) bool True if the repository is frozen, false otherwise. type string The type of repository. uri string The URI to the repository. Used in: Hawk.addRepository, Hawk.listRepositories.","title":"Repository"},{"location":"server/api/#slot","text":"Represents a slot that can store the value(s) of a property of a model element. Inherited by: AttributeSlot, ReferenceSlot, ContainerSlot. 
Name Type Documentation name string The name of the model element property the value of which is stored in this slot.","title":"Slot"},{"location":"server/api/#slotmetadata","text":"Represents the metadata of a slot in a model element type. Name Type Documentation isMany bool True if this slot holds a collection of values instead of a single value. isOrdered bool True if the values in this slot are ordered. isUnique bool True if the value of this slot must be unique within its containing model. name string The name of the model element property that is stored in this slot. type string The type of the values in this slot. Used in: ModelElementType.","title":"SlotMetadata"},{"location":"server/api/#slotvalue","text":"Union type for a single scalar value or a homogeneous collection of scalar values. Inherits from: Value. Name Type Documentation vBoolean (inherited) bool Boolean (true/false) value. vBooleans list List of true/false values. vByte (inherited) byte 8-bit signed integer value. vBytes binary List of 8-bit signed integers. vDouble (inherited) double 64-bit floating point value. vDoubles list List of 64-bit floating point values. vInteger (inherited) i32 32-bit signed integer value. vIntegers list List of 32-bit signed integers. vLong (inherited) i64 64-bit signed integer value. vLongs list List of 64-bit signed integers. vShort (inherited) i16 16-bit signed integer value. vShorts list List of 16-bit signed integers. vString (inherited) string Sequence of UTF8 characters. vStrings list List of sequences of UTF8 characters. Used in: HawkAttributeUpdateEvent, AttributeSlot.","title":"SlotValue"},{"location":"server/api/#subscription","text":"Details about a subscription to a topic queue. Name Type Documentation host string Host name of the message queue server. port i32 Port in which the message queue server is listening. queueAddress string Address of the topic queue. queueName string Name of the topic queue. sslRequired bool Whether SSL is required or not. 
Used in: Hawk.watchStateChanges, Hawk.watchModelChanges.","title":"Subscription"},{"location":"server/api/#userprofile","text":"Minimal details about registered users. Name Type Documentation admin bool Whether the user has admin rights (i.e. so that they can create new users, change the status of admin users, etc.). realName string The real name of the user. Used in: Users.createUser, Users.updateProfile.","title":"UserProfile"},{"location":"server/api/#value","text":"Union type for a single scalar value. Inherited by: QueryResult, SlotValue. Name Type Documentation vBoolean bool Boolean (true/false) value. vByte byte 8-bit signed integer value. vDouble double 64-bit floating point value. vInteger i32 32-bit signed integer value. vLong i64 64-bit signed integer value. vShort i16 16-bit signed integer value. vString string Sequence of UTF8 characters.","title":"Value"},{"location":"server/api/#enumerations","text":"","title":"Enumerations"},{"location":"server/api/#commititemchangetype","text":"Type of change within a commit. Name Documentation ADDED File was added. DELETED File was removed. REPLACED File was replaced. UNKNOWN Unknown type of change. UPDATED File was updated.","title":"CommitItemChangeType"},{"location":"server/api/#hawkstate","text":"One of the states that a Hawk instance can be in. Name Documentation RUNNING The instance is running and monitoring the indexed locations. STOPPED The instance is stopped and is not monitoring any indexed locations. UPDATING The instance is updating its contents from the indexed locations.","title":"HawkState"},{"location":"server/api/#ifcexportstatus","text":"Status of a server-side IFC export job. Name Documentation CANCELLED The job has been cancelled. DONE The job is completed. FAILED The job has failed. RUNNING The job is currently running. SCHEDULED The job has been scheduled but has not started yet.","title":"IFCExportStatus"},{"location":"server/api/#subscriptiondurability","text":"Durability of a subscription. 
Name Documentation DEFAULT Subscription survives client disconnections but not server restarts. DURABLE Subscription survives client disconnections and server restarts. TEMPORARY Subscription removed after disconnecting. Exceptions","title":"SubscriptionDurability"},{"location":"server/api/#failedquery","text":"The specified query failed to complete its execution. Name Type Documentation reason string Reason for the query failing to complete its execution. Used in: Hawk.query.","title":"FailedQuery"},{"location":"server/api/#hawkinstancenotfound","text":"No Hawk instance exists with that name. No fields for this entity. Used in: Hawk.removeInstance, Hawk.startInstance, Hawk.stopInstance, Hawk.syncInstance, Hawk.registerMetamodels, Hawk.unregisterMetamodels, Hawk.listMetamodels, Hawk.query, Hawk.resolveProxies, Hawk.addRepository, Hawk.isFrozen, Hawk.setFrozen, Hawk.removeRepository, Hawk.updateRepositoryCredentials, Hawk.listRepositories, Hawk.listFiles, Hawk.configurePolling, Hawk.addDerivedAttribute, Hawk.removeDerivedAttribute, Hawk.listDerivedAttributes, Hawk.addIndexedAttribute, Hawk.removeIndexedAttribute, Hawk.listIndexedAttributes, Hawk.getModel, Hawk.watchStateChanges, Hawk.watchModelChanges.","title":"HawkInstanceNotFound"},{"location":"server/api/#hawkinstancenotrunning","text":"The selected Hawk instance is not running. No fields for this entity. 
Used in: Hawk.stopInstance, Hawk.syncInstance, Hawk.registerMetamodels, Hawk.unregisterMetamodels, Hawk.listMetamodels, Hawk.query, Hawk.resolveProxies, Hawk.addRepository, Hawk.isFrozen, Hawk.setFrozen, Hawk.removeRepository, Hawk.updateRepositoryCredentials, Hawk.listRepositories, Hawk.listFiles, Hawk.configurePolling, Hawk.addDerivedAttribute, Hawk.removeDerivedAttribute, Hawk.listDerivedAttributes, Hawk.addIndexedAttribute, Hawk.removeIndexedAttribute, Hawk.listIndexedAttributes, Hawk.getModel, Hawk.watchStateChanges, Hawk.watchModelChanges.","title":"HawkInstanceNotRunning"},{"location":"server/api/#invalidderivedattributespec","text":"The derived attribute specification is not valid. Name Type Documentation reason string Reason for the spec not being valid. Used in: Hawk.addDerivedAttribute.","title":"InvalidDerivedAttributeSpec"},{"location":"server/api/#invalidindexedattributespec","text":"The indexed attribute specification is not valid. Name Type Documentation reason string Reason for the spec not being valid. Used in: Hawk.addIndexedAttribute.","title":"InvalidIndexedAttributeSpec"},{"location":"server/api/#invalidmetamodel","text":"The provided metamodel is not valid (e.g. unparsable or inconsistent). Name Type Documentation reason string Reason for the metamodel not being valid. Used in: Hawk.registerMetamodels.","title":"InvalidMetamodel"},{"location":"server/api/#invalidpollingconfiguration","text":"The polling configuration is not valid. Name Type Documentation reason string Reason for the spec not being valid. Used in: Hawk.configurePolling.","title":"InvalidPollingConfiguration"},{"location":"server/api/#invalidquery","text":"The specified query is not valid. Name Type Documentation reason string Reason for the query not being valid. Used in: Hawk.query.","title":"InvalidQuery"},{"location":"server/api/#unknownquerylanguage","text":"The specified query language is not supported by the operation. No fields for this entity. 
Used in: Hawk.query.","title":"UnknownQueryLanguage"},{"location":"server/api/#unknownrepositorytype","text":"The specified repository type is not supported by the operation. No fields for this entity. Used in: Hawk.addRepository.","title":"UnknownRepositoryType"},{"location":"server/api/#userexists","text":"The specified username already exists. No fields for this entity. Used in: Users.createUser.","title":"UserExists"},{"location":"server/api/#usernotfound","text":"The specified username does not exist. No fields for this entity. Used in: Users.updateProfile, Users.updatePassword, Users.deleteUser.","title":"UserNotFound"},{"location":"server/api/#vcsauthenticationfailed","text":"The client failed to prove its identity in the VCS. No fields for this entity. Used in: Hawk.addRepository. This file was automatically generated by Ecore2Thrift. https://github.com/bluezio/ecore2thrift","title":"VCSAuthenticationFailed"},{"location":"server/architecture/","text":"If an entire team is querying the same set of models, indexing them from a central location is more efficient than maintaining multiple indexes. In other cases, we may want to query models from outside Eclipse and even from applications written in other languages (e.g. C++ or Python). To support these use cases, Hawk includes a server that exposes its functionality through a set of Thrift APIs. This server product is a headless Eclipse application that can be run from the command line. The general structure is as shown here: The server component is implemented as an Eclipse application, based on the Eclipse Equinox OSGi runtime. Using Eclipse Equinox for the server allows for integrating the Eclipse-based tools with very few changes in their code, while reducing the chances of mutual interference. 
The OSGi class loading mechanisms ensure that each plugin only "sees" the classes that it declares as dependencies, avoiding common clashes such as requiring different versions of the same Java library or overriding a configuration file with an unexpected copy from another library. To mitigate the risk of connectivity problems due to enterprise firewalls, the server uses the standard HTTP and HTTPS protocols for most of the API (by default, on the unprivileged ports 8080 and 8443) and secures them through Apache Shiro . Optionally, the Hawk API can be exposed through raw TCP on port 2080, for increased performance: however, security-conscious environments should leave it disabled as it does not support authentication. The embedded Apache Artemis messaging queue, which is required for remote change notifications in Hawk, needs its own port, as it manages its own network connections. By default, this is port 61616. These notifications are made available through two protocols: Artemis Core (a lightweight replacement for the Java Message Service, for Java clients) and STOMP over WebSockets (a cross-language messaging protocol, for web-based clients). The server includes plugins that use the standard OSGi HttpService facilities to register servlets and filters. Each service is implemented as one or more of these servlets. The currently implemented endpoints are these: Path within server Service Thrift protocol /thrift/hawk/binary Hawk Binary /thrift/hawk/compact Hawk Compact /thrift/hawk/json Hawk JSON /thrift/hawk/tuple Hawk Tuple /thrift/users Users JSON All services provide a JSON endpoint, since it is compatible across all languages supported by Thrift and works well with web-based clients. However, since Hawk is performance-sensitive (as we might need to encode a large number of model elements in the results of a query), it also provides endpoints with the other Thrift protocols. 
Binary is the most portable after JSON, and Tuple is the most efficient but is only usable from Java clients. Having all four protocols allows Hawk clients to pick the most efficient protocol that is available for their language. The available operations for the Users and Hawk APIs are listed in Thrift API . For details about the optional access control to these APIs, check Thrift API security .","title":"Architecture"},{"location":"server/cli/","text":"You can talk to a Hawk server from one of the console client products in the latest release . Using the product only requires unpacking it and running the main executable within it. Alternatively, you could install the "Hawk CLI Feature" into your Eclipse instance and use these commands from the "Host OSGi Console" in the Console view. Each Thrift API has its own set of commands. Hawk \u00b6 You can use the hawkHelp command to list all the available commands. Connecting to Hawk \u00b6 Name Description hawkConnect <url> [username] [password] Connects to a Thrift endpoint (guesses the protocol from the URL) hawkDisconnect Disconnects from the current Thrift endpoint Managing Hawk indexer instances \u00b6 Name Description hawkAddInstance <name> <backend> [minDelay] [maxDelay|0] Adds an instance with the provided name (if maxDelay = 0, periodic updates are disabled) hawkListBackends Lists the available Hawk backends hawkListInstances Lists the available Hawk instances hawkRemoveInstance <name> Removes an instance with the provided name, if it exists hawkSelectInstance <name> Selects the instance with the provided name hawkStartInstance <name> Starts the instance with the provided name hawkStopInstance <name> Stops the instance with the provided name hawkSyncInstance <name> [waitForSync:true|false] Requests an immediate sync on the instance with the provided name Managing metamodels \u00b6 Name Description hawkListMetamodels Lists all registered metamodels in this instance hawkRegisterMetamodel <files...> 
Registers one or more metamodels hawkUnregisterMetamodel <uri> Unregisters the metamodel with the specified URI Managing version control repositories \u00b6 Name Description hawkAddRepository <url> <type> [user] [pwd] Adds a repository hawkListFiles <url> [filepatterns...] Lists files within a repository hawkListRepositories Lists all registered repositories in this instance hawkListRepositoryTypes Lists available repository types hawkRemoveRepository <url> Removes the repository with the specified URL hawkUpdateRepositoryCredentials <url> <user> <pwd> Changes the user/password used to monitor a repository Querying models \u00b6 Name Description hawkGetModel <repo> [filepatterns...] Returns all the model elements of the specified files within the repo hawkGetRoots <repo> [filepatterns...] Returns only the root model elements of the specified files within the repo hawkListQueryLanguages Lists all available query languages hawkQuery <query> <language> [repo] [files] Queries the index hawkResolveProxies <ids...> Retrieves model elements by ID Managing derived attributes \u00b6 Name Description hawkAddDerivedAttribute <mmURI> <mmType> <name> <type> <lang> <expr> [many|ordered|unique]* Adds a derived attribute hawkListDerivedAttributes Lists all available derived attributes hawkRemoveDerivedAttribute <mmURI> <mmType> <name> Removes a derived attribute, if it exists Managing indexed attributes \u00b6 Name Description hawkAddIndexedAttribute <mmURI> <mmType> <name> Adds an indexed attribute hawkListIndexedAttributes Lists all available indexed attributes hawkRemoveIndexedAttribute <mmURI> <mmType> <name> Removes an indexed attribute, if it exists Watching over changes in remote models \u00b6 Name Description hawkWatchModelChanges [default|temporary|durable] [client ID] [repo] [files...] 
Watches an Artemis message queue with detected model changes Users \u00b6 The Users API has its own set of commands, which can be listed through usersHelp : Name Description usersHelp Lists all the available commands for Users usersConnect <url> [username] [password] Connects to a Thrift endpoint usersDisconnect Disconnects from the current Thrift endpoint usersAdd <username> <realname> <isAdmin: true|false> [password] Adds the user to the database usersUpdateProfile <username> <realname> <isAdmin: true|false> Changes the personal information of a user usersUpdatePassword <username> [password] Changes the password of a user usersRemove <username> Removes a user usersCheck <username> [password] Validates credentials","title":"Console client"},{"location":"server/cli/#hawk","text":"You can use the hawkHelp command to list all the available commands.","title":"Hawk"},{"location":"server/cli/#connecting-to-hawk","text":"Name Description hawkConnect <url> [username] [password] Connects to a Thrift endpoint (guesses the protocol from the URL) hawkDisconnect Disconnects from the current Thrift endpoint","title":"Connecting to Hawk"},{"location":"server/cli/#managing-hawk-indexer-instances","text":"Name Description hawkAddInstance <name> <backend> [minDelay] [maxDelay|0] Adds an instance with the provided name (if maxDelay = 0, periodic updates are disabled) hawkListBackends Lists the available Hawk backends hawkListInstances Lists the available Hawk instances hawkRemoveInstance <name> Removes an instance with the provided name, if it exists hawkSelectInstance <name> Selects the instance with the provided name hawkStartInstance <name> Starts the instance with the provided name hawkStopInstance <name> Stops the instance with the provided name hawkSyncInstance <name> [waitForSync:true|false] Requests an immediate sync on the instance with the provided name","title":"Managing Hawk indexer instances"},{"location":"server/cli/#managing-metamodels","text":"Name Description 
hawkListMetamodels Lists all registered metamodels in this instance hawkRegisterMetamodel <files...> Registers one or more metamodels hawkUnregisterMetamodel <uri> Unregisters the metamodel with the specified URI","title":"Managing metamodels"},{"location":"server/cli/#managing-version-control-repositories","text":"Name Description hawkAddRepository <url> <type> [user] [pwd] Adds a repository hawkListFiles <url> [filepatterns...] Lists files within a repository hawkListRepositories Lists all registered repositories in this instance hawkListRepositoryTypes Lists available repository types hawkRemoveRepository <url> Removes the repository with the specified URL hawkUpdateRepositoryCredentials <url> <user> <pwd> Changes the user/password used to monitor a repository","title":"Managing version control repositories"},{"location":"server/cli/#querying-models","text":"Name Description hawkGetModel <repo> [filepatterns...] Returns all the model elements of the specified files within the repo hawkGetRoots <repo> [filepatterns...] 
Returns only the root model elements of the specified files within the repo hawkListQueryLanguages Lists all available query languages hawkQuery <query> <language> [repo] [files] Queries the index hawkResolveProxies <ids...> Retrieves model elements by ID","title":"Querying models"},{"location":"server/cli/#managing-derived-attributes","text":"Name Description hawkAddDerivedAttribute <mmURI> <mmType> <name> <type> <lang> <expr> [many|ordered|unique]* Adds a derived attribute hawkListDerivedAttributes Lists all available derived attributes hawkRemoveDerivedAttribute <mmURI> <mmType> <name> Removes a derived attribute, if it exists","title":"Managing derived attributes"},{"location":"server/cli/#managing-indexed-attributes","text":"Name Description hawkAddIndexedAttribute <mmURI> <mmType> <name> Adds an indexed attribute hawkListIndexedAttributes Lists all available indexed attributes hawkRemoveIndexedAttribute <mmURI> <mmType> <name> Removes an indexed attribute, if it exists","title":"Managing indexed attributes"},{"location":"server/cli/#watching-over-changes-in-remote-models","text":"Name Description hawkWatchModelChanges [default|temporary|durable] [client ID] [repo] [files...] 
Watches an Artemis message queue with detected model changes","title":"Watching over changes in remote models"},{"location":"server/cli/#users","text":"The Users API has its own set of commands, which can be listed through usersHelp : Name Description usersHelp Lists all the available commands for Users usersConnect <url> [username] [password] Connects to a Thrift endpoint usersDisconnect Disconnects from the current Thrift endpoint usersAdd <username> <realname> <isAdmin: true|false> [password] Adds the user to the database usersUpdateProfile <username> <realname> <isAdmin: true|false> Changes the personal information of a user usersUpdatePassword <username> [password] Changes the password of a user usersRemove <username> Removes a user usersCheck <username> [password] Validates credentials","title":"Users"},{"location":"server/deployment/","text":"Initial setup \u00b6 To run the Hawk server, download the latest hawk-server-*.zip file for your operating system and architecture of choice from the "Releases" section on GitHub , and unpack it. Note that -nogpl- releases do not include GPL-licensed components: if you want them in your server, you will have to build it yourself. Make any relevant changes to the mondo-server.ini file, and then run the run-server.sh script on Linux, or simply the provided mondo-server binary on Mac or Windows. If everything goes well, you should see this message: Welcome to the Hawk Server! List available commands with 'hserverHelp'. Stop the server with 'shutdown' and then 'close'. osgi> You may now use the Thrift APIs as normal. If you need to make any tweaks, continue reading! .ini options \u00b6 You will notice that the .ini file has quite a few different options defined, in addition to the JVM options defined with -vmargs . We will analyze them in this section. -console allows us to use the OSGi console to manage Hawk instances. -consoleLog plugs Eclipse logging into the console, for following what is going on with the server. 
-Dartemis.security.enabled=false disables the Shiro security realm for the embedded Artemis server. Production environments should set this to true . -Dhawk.artemis.host=localhost has Artemis listening only on 127.0.0.1. You should change this to the IP address or hostname of the network interface that you want Artemis to listen on. Alternatively, you can have Artemis listening on all addresses (see -Dhawk.artemis.listenAll below). -Dhawk.artemis.port=61616 has Artemis listening on port 61616 for the CORE and STOMP protocols. -Dhawk.artemis.listenAll=false prevents Artemis from listening on all addresses. You can set this to true and ignore hawk.artemis.host . -Dhawk.artemis.sslEnabled=false disables HTTPS on Artemis. If you enable SSL, you will need to check the "Enabling HTTPS" section further below! -Dhawk.tcp.port=2080 enables the TCP server for only the Hawk API, and not the Users management one. This API is unsecured, so do this at your own risk. For production environments, you should remove this line. -Dhawk.tcp.thriftProtocol=TUPLE changes the Thrift protocol (encoding) that should be used for the TCP endpoint. -Dorg.eclipse.equinox.http.jetty.customizer.class=org.hawk.service.server.gzip.Customizer is needed to enable GZIP compression of the HTTP API responses. -Dorg.osgi.service.http.port=8080 sets the HTTP port for the APIs to 8080. -Dorg.osgi.service.http.port.secure=8443 sets the HTTPS port for the APIs to 8443. -Dosgi.noShutdown=true is needed for the server to stay running. -Dsvnkit.library.gnome-keyring.enabled=false is required to work around a bug in the integration of the GNOME keyring in recent Eclipse releases. -eclipse.keyring and -eclipse.password are the paths to the keyring and keyring password files which store the VCS credentials Hawk needs to access password-protected SVN repositories. (For Git repositories, you are assumed to keep your own clone and do any periodic pulling yourself.) -XX:+UseG1GC (part of -vmargs ) improves garbage collection in OrientDB and Neo4j. 
-XX:+UseStringDeduplication (part of -vmargs as well) noticeably reduces memory use in OrientDB. Ports \u00b6 These are the default ports that a Hawk server uses: 2080: Hawk raw TCP API, available by default (unsecured: see above for how to disable it) 8080: Hawk HTTP API, available by default (optionally secured: see below) 8443: Hawk HTTPS API, if enabled (optionally secured: see below) 61616: Artemis push notifications for Hawk index status updates (optionally secured / encrypted: see below) Concerns for production environments \u00b6 One important detail for production environments is turning on security. This is disabled by default to help with testing and initial evaluations, but it can be enabled by running the server once, shutting it down and then editing the shiro.ini file appropriately (relevant sections include comments on what to do) and switching artemis.security.enabled to true in the mondo-server.ini file. The MONDO server uses an embedded MapDB database, which is managed through the Users Thrift API. Once security is enabled, all Thrift APIs and all external (not in-VM) Artemis connections become password-protected. If you enable security, you will want to ensure that -Dhawk.tcp.port is not present in the mondo-server.ini file, since the Hawk TCP port does not support security for the sake of raw performance. If you are deploying this across a network, you will need to edit the mondo-server.ini file and customize the hawk.artemis.host line to the host that you want the Artemis server to listen on. Normally, this should be the IP address or hostname of the MONDO server in the network. The Thrift API uses this hostname as well in its replies to the watchModelChanges operation in the Hawk API. Additionally, if the server IP is dynamic but has a consistent DNS name (e.g. 
an Amazon VM + a dynamic DNS provider), we recommend setting hawk.artemis.listenAll to true (so the Artemis server will keep listening on all interfaces, even if the IP address changes) and using the DNS name for hawk.artemis.host instead of a literal IP address. Finally, production environments should enable and enforce SSL as well, since plain HTTP is insecure. The Linux products include a shell script that generates simple self-signed key/trust stores and indicates which Java system properties should be set on the server and the client. Secure storage of VCS credentials \u00b6 The server hosts a copy of the Hawk model indexer, which may need to access remote Git and Subversion repositories. To access password-protected repositories, the server will need to store the proper credentials in a secure way that will not expose them to other users on the same machine. To achieve this goal, the MONDO server uses the Eclipse secure storage facilities to save the password in an encrypted form. Users need to prepare the secure storage by following these two steps: The secure store must be kept in a location that no other program will try to access concurrently. This can be done by editing the mondo-server.ini server configuration file and adding this: -eclipse.keyring /path/to/keyringfile That path should only be readable by the user running the server, for added security. An encryption password must be set. For Windows and Mac, the available OS integration should be enough. For Linux environments, two lines have to be added at the beginning of the mondo-server.ini file, specifying the path to a password file with: -eclipse.password /path/to/passwordfile. 
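Putting the two options together, the start of a mondo-server.ini file might look like this sketch (the paths are the same placeholders used above; in Eclipse-style .ini files, each option and its value go on separate lines):

```ini
-eclipse.keyring
/path/to/keyringfile
-eclipse.password
/path/to/passwordfile
```

Note that these lines must appear before -vmargs , since everything after -vmargs is passed to the JVM rather than to the launcher.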
On Linux, creating a password file from 100 bytes of random data that is only readable by the current user can be done with these commands: $ head -c 100 /dev/random | base64 > /path/to/password $ chmod 400 /path/to/password The server checks on startup that the secure store has been set up properly, warning users if encryption is not available and urging them to revise their setup. Setting up SSL certificates for the server \u00b6 SSL is handled through standard Java keystore ( .jks ) files. To produce a keystore with some self-signed certificates, you could use the generate-ssl-certs.sh script included in the Linux distribution, or run these commands from other operating systems (replace CN, OU and so forth with the appropriate values): keytool -genkey -keystore mondo-server-keystore.jks -storepass secureexample -keypass secureexample -dname \"CN=localhost, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ\" -keyalg RSA keytool -export -keystore mondo-server-keystore.jks -file mondo-jks.cer -storepass secureexample keytool -import -keystore mondo-client-truststore.jks -file mondo-jks.cer -storepass secureexample -keypass secureexample -noprompt Once you have your .jks, on the client .ini you'll need to set: -Djavax.net.ssl.trustStore=path/to/client-truststore.jks -Djavax.net.ssl.trustStorePassword=secureexample On the server .ini, you'll need to enable SSL and tell Jetty and Artemis about your KeyStore: -Dorg.eclipse.equinox.http.jetty.https.enabled=true -Dhawk.artemis.sslEnabled=true -Dorg.eclipse.equinox.http.jetty.ssl.keystore=path/to/server-keystore.jks -Djavax.net.ssl.keyStore=path/to/server-keystore.jks You'll be prompted for the key store password three times: twice by Jetty and once by the Artemis server. 
If you don't want these prompts, you could use these properties, but using them is UNSAFE , as another user on the same machine could retrieve these passwords from your process manager: -Djavax.net.ssl.keyStorePassword=secureexample -Dorg.eclipse.equinox.http.jetty.ssl.keypassword=secureexample -Dorg.eclipse.equinox.http.jetty.ssl.password=secureexample","title":"Deployment"},{"location":"server/deployment/#initial-setup","text":"To run the Hawk server, download the latest hawk-server-*.zip file for your operating system and architecture of choice from the \"Releases\" section on GitHub , and unpack it. Note that -nogpl- releases do not include GPL-licensed components: if you want them in your server, you will have to build it yourself. Make any relevant changes to the mondo-server.ini file, and then run the run-server.sh script from Linux, or simply the provided mondo-server binary from Mac or Windows. If everything goes well, you should see this message: Welcome to the Hawk Server! List available commands with 'hserverHelp'. Stop the server with 'shutdown' and then 'close'. osgi> You may now use the Thrift APIs as normal. If you need to make any tweaks, continue reading!","title":"Initial setup"},{"location":"server/deployment/#ini-options","text":"You will notice that the .ini file has quite a few different options defined, in addition to the JVM options defined with -vmargs . We will analyze them in this section. -console allows us to use the OSGi console to manage Hawk instances. -consoleLog plugs Eclipse logging into the console, for following what is going on with the server. -Dartemis.security.enabled=false disables the Shiro security realm for the embedded Artemis server. Production environments should set this to true . -Dhawk.artemis.host=localhost has Artemis listening only on 127.0.0.1. You should change this to the IP address or hostname of the network interface that you want Artemis to listen on. 
Alternatively, you can have Artemis listening on all addresses (see -Dhawk.artemis.listenAll below). -Dhawk.artemis.port=61616 has Artemis listening on port 61616 using the CORE and STOMP protocols. -Dhawk.artemis.listenAll=false prevents Artemis from listening on all addresses. You can set this to true and ignore hawk.artemis.host . -Dhawk.artemis.sslEnabled=false disables HTTPS on Artemis. If you enable SSL, you will need to check the \"Enabling HTTPS\" section further below! -Dhawk.tcp.port=2080 enables the TCP server for only the Hawk API, and not the Users management one. This API is unsecured, so do this at your own risk. For production environments, you should remove this line. -Dhawk.tcp.thriftProtocol=TUPLE changes the Thrift protocol (encoding) that should be used for the TCP endpoint. -Dorg.eclipse.equinox.http.jetty.customizer.class=org.hawk.service.server.gzip.Customizer is needed to enable GZip compression in the HTTP APIs. -Dorg.osgi.service.http.port=8080 sets the HTTP port for the APIs to 8080. -Dorg.osgi.service.http.port.secure=8443 sets the HTTPS port for the APIs to 8443. -Dosgi.noShutdown=true is needed for the server to stay running. -Dsvnkit.library.gnome-keyring.enabled=false is required to work around a bug in the integration of the GNOME keyring in recent Eclipse releases. -eclipse.keyring and -eclipse.password are the paths to the keyring and keyring password files which store the VCS credentials Hawk needs to access password-protected SVN repositories. (For Git repositories, you are assumed to keep your own clone and do any periodic pulling yourself.) -XX:+UseG1GC (part of -vmargs ) improves garbage collection in OrientDB and Neo4j. 
-XX:+UseStringDeduplication (part of -vmargs as well) noticeably reduces memory use in OrientDB.","title":".ini options"},{"location":"server/deployment/#ports","text":"These are the default ports that a Hawk server uses: 2080: Hawk raw TCP API, available by default (unsecured: see above for how to disable it) 8080: Hawk HTTP API, available by default (optionally secured: see below) 8443: Hawk HTTPS API, if enabled (optionally secured: see below) 61616: Artemis push notifications for Hawk index status updates (optionally secured / encrypted: see below)","title":"Ports"},{"location":"server/deployment/#concerns-for-production-environments","text":"One important detail for production environments is turning on security. This is disabled by default to help with testing and initial evaluations, but it can be enabled by running the server once, shutting it down and then editing the shiro.ini file appropriately (relevant sections include comments on what to do) and switching artemis.security.enabled to true in the mondo-server.ini file. The MONDO server uses an embedded MapDB database, which is managed through the Users Thrift API. Once security is enabled, all Thrift APIs and all external (not in-VM) Artemis connections become password-protected. If you enable security, you will want to ensure that -Dhawk.tcp.port is not present in the mondo-server.ini file, since the Hawk TCP port does not support security for the sake of raw performance. If you are deploying this across a network, you will need to edit the mondo-server.ini file and customize the hawk.artemis.host line to the host that you want the Artemis server to listen on. Normally, this should be the IP address or hostname of the MONDO server in the network. The Thrift API uses this hostname as well in its replies to the watchModelChanges operation in the Hawk API. Additionally, if the server IP is dynamic but has a consistent DNS name (e.g. 
an Amazon VM + a dynamic DNS provider), we recommend setting hawk.artemis.listenAll to true (so the Artemis server will keep listening on all interfaces, even if the IP address changes) and using the DNS name for hawk.artemis.host instead of a literal IP address. Finally, production environments should enable and enforce SSL as well, since plain HTTP is insecure. The Linux products include a shell script that generates simple self-signed key/trust stores and indicates which Java system properties should be set on the server and the client.","title":"Concerns for production environments"},{"location":"server/deployment/#secure-storage-of-vcs-credentials","text":"The server hosts a copy of the Hawk model indexer, which may need to access remote Git and Subversion repositories. To access password-protected repositories, the server will need to store the proper credentials in a secure way that will not expose them to other users on the same machine. To achieve this goal, the MONDO server uses the Eclipse secure storage facilities to save the password in an encrypted form. Users need to prepare the secure storage by following these two steps: The secure store must be kept in a location that no other program will try to access concurrently. This can be done by editing the mondo-server.ini server configuration file and adding this: -eclipse.keyring /path/to/keyringfile That path should only be readable by the user running the server, for added security. An encryption password must be set. For Windows and Mac, the available OS integration should be enough. For Linux environments, two lines have to be added at the beginning of the mondo-server.ini file, specifying the path to a password file with: -eclipse.password /path/to/passwordfile. 
On Linux, creating a password file from 100 bytes of random data that is only readable by the current user can be done with these commands: $ head -c 100 /dev/random | base64 > /path/to/password $ chmod 400 /path/to/password The server checks on startup that the secure store has been set up properly, warning users if encryption is not available and urging them to revise their setup.","title":"Secure storage of VCS credentials"},{"location":"server/deployment/#setting-up-ssl-certificates-for-the-server","text":"SSL is handled through standard Java keystore ( .jks ) files. To produce a keystore with some self-signed certificates, you could use the generate-ssl-certs.sh script included in the Linux distribution, or run these commands from other operating systems (replace CN, OU and so forth with the appropriate values): keytool -genkey -keystore mondo-server-keystore.jks -storepass secureexample -keypass secureexample -dname \"CN=localhost, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ\" -keyalg RSA keytool -export -keystore mondo-server-keystore.jks -file mondo-jks.cer -storepass secureexample keytool -import -keystore mondo-client-truststore.jks -file mondo-jks.cer -storepass secureexample -keypass secureexample -noprompt Once you have your .jks, on the client .ini you'll need to set: -Djavax.net.ssl.trustStore=path/to/client-truststore.jks -Djavax.net.ssl.trustStorePassword=secureexample On the server .ini, you'll need to enable SSL and tell Jetty and Artemis about your KeyStore: -Dorg.eclipse.equinox.http.jetty.https.enabled=true -Dhawk.artemis.sslEnabled=true -Dorg.eclipse.equinox.http.jetty.ssl.keystore=path/to/server-keystore.jks -Djavax.net.ssl.keyStore=path/to/server-keystore.jks You'll be prompted for the key store password three times: twice by Jetty and once by the Artemis server. 
If you don't want these prompts, you could use these properties, but using them is UNSAFE , as another user on the same machine could retrieve these passwords from your process manager: -Djavax.net.ssl.keyStorePassword=secureexample -Dorg.eclipse.equinox.http.jetty.ssl.keypassword=secureexample -Dorg.eclipse.equinox.http.jetty.ssl.password=secureexample","title":"Setting up SSL certificates for the server"},{"location":"server/eclipse/","text":"Hawk includes multiple optional features to integrate the Thrift APIs with regular Eclipse-based tooling: A custom Hawk instance type that operates over the Thrift API instead of locally. An EMF abstraction that allows for treating remote models as local ones. An editor for the .hawkmodel model access descriptors used by the above EMF resource abstraction. This page documents how these different features can be used. Managing remote Hawk indexers \u00b6 When creating a Hawk instance for the first time (using the dialog shown below), users can specify which factory will be used. The name of the selected factory will be saved into the configuration of the instance, allowing Hawk to recreate the instance in later executions without asking again. Hawk provides a default LocalHawkFactory whose LocalHawk instances operate in the current Java virtual machine. Users can also specify which Hawk components should be enabled. A factory can also be used to \"import\" instances that already exist but Hawk does not know about. For the local case, these would be instances that were previously removed from Eclipse but whose folders were not deleted. The Eclipse import dialog looks like this: The \"Thrift API integration for Hawk GUI\" feature provides a plugin that contributes a new indexer factory, ThriftRemoteHawkFactory, which produces ThriftRemoteHawk instances that use ThriftRemoteModelIndexer indexers. When creating a new instance, the factory will use the createInstance operation to add the instance to the server. 
When used to \"import\", the remote factory retrieves the list of Hawk instances available on the server through the listInstances operation of the Thrift API. Management actions (such as starting or stopping the instance) and their results are likewise translated between the user interface and the Thrift API. The Hawk user interface provides live updates on the current state of each indexer, with short status messages and an indication of whether the indexer is stopped, running or updating. Management actions and queries are disabled during an update, to prevent data consistency issues. The Hawk indexer in the remote server talks to the client through an Artemis queue: please make sure Artemis has been set up correctly in the server (see the setup guide ). All these aspects are transparent to the user: the only difference is selecting the appropriate \"Instance type\" in the new instance or import dialogs and entering the URL to the Hawk Thrift endpoint. If the remote instance type is chosen, Hawk will only list the Hawk components that are installed in the server, which may differ from those installed in the client. Editor for remote model access descriptors \u00b6 There are many different use cases for retrieving models over the network, each with their own requirements. The EMF model abstraction uses a .hawkmodel model access descriptor to specify the exact configuration we want to use when fetching the model over the network. .hawkmodel files can be opened by any EMF-compatible tool and operate just like a regular model. To simplify the creation and maintenance of these .hawkmodel files, an Eclipse-based editor is provided in the \"Remote Hawk EMF Model UI Feature\". The editor is divided into three tabs: a form-based tab for editing most aspects of the descriptor in a controlled manner, another form-based tab for editing the effective metamodel to limit the contents of the model, and a text-based tab for editing the descriptor directly. 
Main tab \u00b6 Here is a screenshot of the main tab: The main form-based tab is divided into three sections: The \"Instance\" section provides connection details for the remote Hawk instance: the URL of the Thrift endpoint, the Thrift protocol to use (more details in D5.6) and the name of the Hawk instance within the server. \"Instance name\" can be clicked to open a selection dialog with all the available instances. The \"Username\" and \"Password\" fields only need to be filled in if using the .hawkmodel file outside Eclipse. When using the .hawkmodel inside Eclipse, the remote EMF abstraction will fall back on the credentials stored in the Eclipse secure store if needed. The \"Contents\" section allows for filtering the contents of the Hawk index to be read and changing how they should be loaded: By default, the entire index is retrieved (repository URL is '*', file pattern is '*' and no query is used). The \"Repository URL\", \"File pattern(s)\" and \"Query language\" labels can be clicked to open selection dialogs with the appropriate options. The default loading mode is \"GREEDY\" (send the entire contents of the model in one message), but various lazy loading modes are available. The contents of the index can be split over the different source files or not. While splitting by file is useful for browsing, some EMF-based tools may not be compatible with it. The \"Default namespaces\" field makes it possible to resolve ambiguous type names. For instance, both the IFC2x3 and the IFC4 metamodels have a type called IfcActor . Without this field, the query would need to specify which one of the two metamodels should be used on every reference to IfcActor , which is unwieldy and prone to mistakes. With this field filled, the query will be told to resolve ambiguous type references to those of the IFC2x3 metamodel. 
The \"Page size for initial load\" field can be set to a value other than 0, indicating that during the initial load of the model, its contents should not be sent in one response message, but rather divided into \"pages\" of a certain size. It was observed that a GREEDY loading mode with an adequate page size can be faster to load than a lazy loading mode, while still keeping server memory and bandwidth requirements under control. The \"Subscription\" section allows users to enable live updates in the opened model through the watchGraphChanges operation and an Apache Artemis queue of a certain durability. In order to allow the server to recognize users that reconnect after a connection loss, a unique client ID should be provided. Effective metamodel tab \u00b6 The effective metamodel editor tab presents a table that lists all the metamodels registered in the selected remote Hawk instance, their types, and their features (called \"slots\" by the Hawk API). It is structured as a tree with three levels, with the metamodels at the root level, the types inside the metamodels, and their slots inside the types. The implicit default is that all metamodels are completely included, but users can manually include or exclude certain metamodels, types or slots within the types. This can be done through drop-down selection lists on the \"State\" column of the table, or through the buttons on the right of the table: \"Include all\" resets the entire table to the default state of implicitly including everything. \"Exclude all\" resets the entire table to excluding all metamodels. \"Exclude\" and \"Include\" only change the state of the currently selected element. \"Reset\" returns the currently selected element to the \"Default\" state. The effective metamodel is saved as part of the .hawkmodel file, and uses both inclusion and exclusion rules to remain as compact as possible (as it will need to be sent over the network). 
The rules work as follows: A metamodel is included if it is \"Included\", or if it has the \"Default\" state and no metamodels are explicitly \"Included\". A type is included if it is not \"Excluded\" and its metamodel is included. A slot is included if it is not \"Excluded\" and its type is included.","title":"Eclipse client"},{"location":"server/eclipse/#managing-remote-hawk-indexers","text":"When creating a Hawk instance for the first time (using the dialog shown below), users can specify which factory will be used. The name of the selected factory will be saved into the configuration of the instance, allowing Hawk to recreate the instance in later executions without asking again. Hawk provides a default LocalHawkFactory whose LocalHawk instances operate in the current Java virtual machine. Users can also specify which Hawk components should be enabled. A factory can also be used to \"import\" instances that already exist but Hawk does not know about. For the local case, these would be instances that were previously removed from Eclipse but whose folders were not deleted. The Eclipse import dialog looks like this: The \"Thrift API integration for Hawk GUI\" feature provides a plugin that contributes a new indexer factory, ThriftRemoteHawkFactory, which produces ThriftRemoteHawk instances that use ThriftRemoteModelIndexer indexers. When creating a new instance, the factory will use the createInstance operation to add the instance to the server. When used to \"import\", the remote factory retrieves the list of Hawk instances available on the server through the listInstances operation of the Thrift API. Management actions (such as starting or stopping the instance) and their results are likewise translated between the user interface and the Thrift API. The Hawk user interface provides live updates on the current state of each indexer, with short status messages and an indication of whether the indexer is stopped, running or updating. 
Management actions and queries are disabled during an update, to prevent data consistency issues. The Hawk indexer in the remote server talks to the client through an Artemis queue: please make sure Artemis has been set up correctly in the server (see the setup guide ). All these aspects are transparent to the user: the only difference is selecting the appropriate \"Instance type\" in the new instance or import dialogs and entering the URL to the Hawk Thrift endpoint. If the remote instance type is chosen, Hawk will only list the Hawk components that are installed in the server, which may differ from those installed in the client.","title":"Managing remote Hawk indexers"},{"location":"server/eclipse/#editor-for-remote-model-access-descriptors","text":"There are many different use cases for retrieving models over the network, each with their own requirements. The EMF model abstraction uses a .hawkmodel model access descriptor to specify the exact configuration we want to use when fetching the model over the network. .hawkmodel files can be opened by any EMF-compatible tool and operate just like a regular model. To simplify the creation and maintenance of these .hawkmodel files, an Eclipse-based editor is provided in the \"Remote Hawk EMF Model UI Feature\". The editor is divided into three tabs: a form-based tab for editing most aspects of the descriptor in a controlled manner, another form-based tab for editing the effective metamodel to limit the contents of the model, and a text-based tab for editing the descriptor directly.","title":"Editor for remote model access descriptors"},{"location":"server/eclipse/#main-tab","text":"Here is a screenshot of the main tab: The main form-based tab is divided into three sections: The \"Instance\" section provides connection details for the remote Hawk instance: the URL of the Thrift endpoint, the Thrift protocol to use (more details in D5.6) and the name of the Hawk instance within the server. 
\"Instance name\" can be clicked to open a selection dialog with all the available instances. The \"Username\" and \"Password\" fields only need to be filled in if using the .hawkmodel file outside Eclipse. When using the .hawkmodel inside Eclipse, the remote EMF abstraction will fall back on the credentials stored in the Eclipse secure store if needed. The \"Contents\" section allows for filtering the contents of the Hawk index to be read and changing how they should be loaded: By default, the entire index is retrieved (repository URL is '*', file pattern is '*' and no query is used). The \"Repository URL\", \"File pattern(s)\" and \"Query language\" labels can be clicked to open selection dialogs with the appropriate options. The default loading mode is \"GREEDY\" (send the entire contents of the model in one message), but various lazy loading modes are available. The contents of the index can be split over the different source files or not. While splitting by file is useful for browsing, some EMF-based tools may not be compatible with it. The \"Default namespaces\" field makes it possible to resolve ambiguous type names. For instance, both the IFC2x3 and the IFC4 metamodels have a type called IfcActor . Without this field, the query would need to specify which one of the two metamodels should be used on every reference to IfcActor , which is unwieldy and prone to mistakes. With this field filled, the query will be told to resolve ambiguous type references to those of the IFC2x3 metamodel. The \"Page size for initial load\" field can be set to a value other than 0, indicating that during the initial load of the model, its contents should not be sent in one response message, but rather divided into \"pages\" of a certain size. It was observed that a GREEDY loading mode with an adequate page size can be faster to load than a lazy loading mode, while still keeping server memory and bandwidth requirements under control. 
The \"Subscription\" section allows users to enable live updates in the opened model through the watchGraphChanges operation and an Apache Artemis queue of a certain durability. In order to allow the server to recognize users that reconnect after a connection loss, a unique client ID should be provided.","title":"Main tab"},{"location":"server/eclipse/#effective-metamodel-tab","text":"The effective metamodel editor tab presents a table that lists all the metamodels registered in the selected remote Hawk instance, their types, and their features (called \"slots\" by the Hawk API). It is structured as a tree with three levels, with the metamodels at the root level, the types inside the metamodels, and their slots inside the types. The implicit default is that all metamodels are completely included, but users can manually include or exclude certain metamodels, types or slots within the types. This can be done through drop-down selection lists on the \"State\" column of the table, or through the buttons on the right of the table: \"Include all\" resets the entire table to the default state of implicitly including everything. \"Exclude all\" resets the entire table to excluding all metamodels. \"Exclude\" and \"Include\" only change the state of the currently selected element. \"Reset\" returns the currently selected element to the \"Default\" state. The effective metamodel is saved as part of the .hawkmodel file, and uses both inclusion and exclusion rules to remain as compact as possible (as it will need to be sent over the network). The rules work as follows: A metamodel is included if it is \"Included\", or if it has the \"Default\" state and no metamodels are explicitly \"Included\". A type is included if it is not \"Excluded\" and its metamodel is included. 
A slot is included if it is not \"Excluded\" and its type is included.","title":"Effective metamodel tab"},{"location":"server/file-config/","text":"The Hawk server includes an API to add Hawk instances that are used to index and query models. The configuration engine allows the server to create and configure Hawk instances as per user-created configuration files. The server should be ready to receive user queries upon startup, without any interaction from users or clients. Upon startup, the Hawk server reads and parses the configuration files, and then creates/updates Hawk instances accordingly. NOTE: the Hawk server no longer writes to configuration files. If an instance configuration changes during operation, this configuration is persisted through the current HawkConfig mechanism. Configuration files will not overwrite any of the changed settings. The only exception is the polling min/max, which will revert to config file settings if a server is restarted. Format \u00b6 Configuration files are XML files that define the Hawk instance name and its configuration. An XML schema can be found at HawkServerConfigurationSchema.xsd . 
A sample configuration file can be found at Sample Configuration File . The XML should include the following elements: Table 1: List of XML elements in configuration file \u00b6 Element Name Parent Element Name Multiplicity Value Description \u2018hawk\u2019 xml 1 None Root element \u2018delay\u2019 \u2018hawk\u2019 1 None Polling configuration \u2018plugins\u2019 \u2018hawk\u2019 0-1 None List of plugins (to be/that are) enabled \u2018plugin\u2019 \u2018plugins\u2019 0-* None Plugin name \u2018metamodels\u2019 \u2018hawk\u2019 0-1 None List of metamodels (to be/that are) registered \u2018metamodel\u2019 \u2018metamodels\u2019 0-* None Metamodel parameters \u2018repositories\u2019 \u2018hawk\u2019 0-1 None List of repositories (to be/that are) added \u2018repository\u2019 \u2018repositories\u2019 0-* None Repository parameters \u2018derivedAttributes\u2019 \u2018hawk\u2019 0-1 None List of derived attributes (to be/that are) added \u2018derivedAttribute\u2019 \u2018derivedAttributes\u2019 0-* None Derived attribute parameters \u2018derivation\u2019 \u2018derivedAttribute\u2019 0-1 None Derivation parameters \u2018logic\u2019 \u2018derivation\u2019 0-1 CDATA section An executable expression of the derivation logic in the language specified. 
\u2018indexedAttributes\u2019 \u2018hawk\u2019 0-1 None List of indexed attributes (to be/that are) added \u2018indexedAttribute\u2019 \u2018indexedAttributes\u2019 0-* None Indexed attribute parameters Table 2: \u2018hawk\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018hawk\u2019 \u2018name\u2019 Required String The unique name of the new Hawk instance \u2018backend\u2019 Required String The name of the backend to be used (e.g. org.hawk.orientdb.OrientDatabase, org.hawk.orientdb.RemoteOrientDatabase) Table 3: \u2018delay\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018delay\u2019 \u2018min\u2019 Required String Minimum delay between periodic synchronization in milliseconds \u2018max\u2019 Required String Maximum delay between periodic synchronization in milliseconds (0 to disable periodic synchronization) Table 4: \u2018plugin\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018plugin\u2019 \u2018name\u2019 Required String The plugin name (e.g. org.hawk.modelio.exml.listeners.ModelioGraphChangeListener, org.hawk.modelio.exml.metamodel.ModelioMetaModelResourceFactory, org.hawk.modelio.exml.model.ModelioModelResourceFactory) Table 5: \u2018metamodel\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018metamodel\u2019 \u2018location\u2019 Optional String Location of metamodel file to be registered ~~\u2018uri\u2019~~ ~~Optional~~ ~~String~~ ~~Metamodel URI. 
This value is set automatically by the server to list registered metamodels~~ Table 6: \u2018repository\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018repository\u2019 \u2018location\u2019 Required String Location of the repository \u2018type\u2019 Optional String The type of repository (see the available repository types) \u2018user\u2019 Optional String Username for logging into the VCS \u2018pass\u2019 Optional String Password for logging into the VCS \u2018frozen\u2019 Optional String If the repository is frozen (true/false) Table 7: \u2018derivedAttribute\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018derivedAttribute\u2019 \u2018attributeName\u2019 Required String The name of the derived attribute \u2018typeName\u2019 Required String The name of the type to which the derived attribute belongs \u2018metamodelUri\u2019 Required String The URI of the metamodel to which the derived attribute belongs \u2018attributeType\u2019 Optional String The (primitive) type of the derived attribute \u2018isOrdered\u2019 Optional String A flag specifying whether the order of the values of the derived attribute is significant (only makes sense when isMany=true) \u2018isUnique\u2019 Optional String A flag specifying whether the values of the derived attribute are unique (only makes sense when isMany=true) \u2018isMany\u2019 Optional String The multiplicity of the derived attribute Table 8: \u2018derivation\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018derivation\u2019 \u2018language\u2019 Required String The language used to express the derivation logic. 
Available languages in Hawk: org.hawk.epsilon.emc.EOLQueryEngine, org.hawk.orientdb.query.OrientSQLQueryEngine, org.hawk.epsilon.emc.EPLQueryEngine Table 9: \u2018indexedAttribute\u2019 attributes \u00b6 Element Name Attribute name Optional/Required Type Description \u2018indexedAttribute\u2019 \u2018attributeName\u2019 Required String The name of the indexed attribute. \u2018typeName\u2019 Required String The name of the type to which the indexed attribute belongs. \u2018metamodelUri\u2019 Required String The URI of the metamodel to which the indexed attribute belongs. Location \u00b6 Configuration files are expected to be located in the \u2018configuration\u2019 folder in the server\u2019s home directory. Each Hawk instance should have its own configuration file. There are no rules on how the file should be named. It is a good practice to include the Hawk instance name in the file name for easy recognition. How to use/enable Hawk instance configuration engine \u00b6 You can follow this video tutorial , or alternatively follow these steps: Download the hawk-server-*.zip file for your operating system and architecture of choice from Hawk Server With Configuration Create a configuration file for each instance required to run in the Hawk server. 
Edit configuration files: Set instance name, backend, delay Add list of plugins to be enabled Add metamodel file locations to be registered Add repositories that are to be indexed Add any required derived attributes Add any required indexed attributes Save the configuration files to the \u2018configuration\u2019 folder in the server\u2019s home directory (see figure 1) Perform any other configuration required by the Hawk Server and start the server (by following the instructions at Deploying-and-running-the-server ) Check that the Hawk instances are added and running by typing \u2018hawkListInstances\u2019 in the server\u2019s command terminal: Usage Notes \u00b6 Deleting configuration files from the directory will not delete instances from the server. However, the server will not start those instances. To test the Hawk server with the Measure Platform, refer to Using HawkQueryMeasure to query a Hawk instance running in a Hawk Server","title":"File-based configuration"},{"location":"server/file-config/#format","text":"Configuration files are XML files that define a Hawk instance name and its configuration. An XML schema can be found at HawkServerConfigurationSchema.xsd . 
A sample configuration file can be found at Sample Configuration File . The XML should include the following elements:","title":"Format"},{"location":"server/file-config/#table-1-list-of-xml-elements-in-configuration-file","text":"Element Name Parent Element Name multiplicity Value Description \u2018hawk\u2019 xml 1 None Root element \u2018delay\u2019 \u2018hawk\u2019 1 None Polling configuration \u2018plugins\u2019 \u2018hawk\u2019 0-1 None List of plugins (to be/that are) enabled \u2018plugin\u2019 \u2018plugins\u2019 0-* None Plugin name \u2018metamodels\u2019 \u2018hawk\u2019 0-1 None List of metamodels (to be/that are) registered \u2018metamodel\u2019 \u2018metamodels\u2019 0-* None Metamodel parameters \u2018repositories\u2019 \u2018hawk\u2019 0-1 None List of repositories (to be/that are) added \u2018repository\u2019 \u2018repositories\u2019 0-* None Repository parameters \u2018derivedAttributes\u2019 \u2018hawk\u2019 0-1 None List of derived attributes (to be/that are) added \u2018derivedAttribute\u2019 \u2018derivedAttributes\u2019 0-* None Derived attribute parameters \u2018derivation\u2019 \u2018derivedAttribute\u2019 0-1 None Derivation parameters \u2018logic\u2019 \u2018derivation\u2019 0-1 CDATA section An executable expression of the derivation logic in the language specified. 
\u2018indexedAttributes\u2019 \u2018hawk\u2019 0-1 None List of indexed attributes (to be/that are) added \u2018indexedAttribute\u2019 \u2018indexedAttributes\u2019 0-* None Indexed attribute parameters","title":"Table 1: List of XML elements in configuration file"},{"location":"server/file-config/#table-2-hawk-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018hawk\u2019 \u2018name\u2019 Required String The unique name of the new Hawk instance \u2018backend\u2019 Required String The name of the backend to be used (e.g. org.hawk.orientdb.OrientDatabase, org.hawk.orientdb.RemoteOrientDatabase)","title":"Table 2:     \u2018hawk\u2019 attributes"},{"location":"server/file-config/#table-3-delay-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018delay\u2019 \u2018min\u2019 Required String Minimum delay between periodic synchronization in milliseconds \u2018max\u2019 Required String Maximum delay between periodic synchronization in milliseconds (0 to disable periodic synchronization)","title":"Table 3:     \u2018delay\u2019 attributes"},{"location":"server/file-config/#table-4-plugin-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018plugin\u2019 \u2018name\u2019 Required String e.g. (org.hawk.modelio.exml.listeners.ModelioGraphChangeListener, org.hawk.modelio.exml.metamodel.ModelioMetaModelResourceFactory, org.hawk.modelio.exml.model.ModelioModelResourceFactory)","title":"Table 4:     \u2018plugin\u2019 attributes"},{"location":"server/file-config/#table-5-metamodel-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018metamodel\u2019 \u2018location\u2019 Optional String Location of metamodel file to be registered ~~\u2018uri\u2019~~ ~~Optional~~ ~~String~~ ~~Metamodel URI. 
This value is set automatically by the server to list registered metamodels~~","title":"Table 5:     \u2018metamodel\u2019 attributes"},{"location":"server/file-config/#table-6-repository-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018repository\u2019 \u2018location\u2019 Required String Location of the repository \u2018type\u2019 Optional String The type of repository (see available repository types) \u2018user\u2019 Optional String Username for logging into the VCS \u2018pass\u2019 Optional String Password for logging into the VCS \u2018frozen\u2019 Optional String If the repository is frozen (true/false)","title":"Table 6:     \u2018repository\u2019 attributes"},{"location":"server/file-config/#table-7-derivedattribute-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018derivedAttribute\u2019 \u2018attributeName\u2019 Required String The name of the derived attribute \u2018typeName\u2019 Required String The name of the type to which the derived attribute belongs \u2018metamodelUri\u2019 Required String The URI of the metamodel to which the derived attribute belongs \u2018attributeType\u2019 Optional String The (primitive) type of the derived attribute \u2018isOrdered\u2019 Optional String A flag specifying whether the order of the values of the derived attribute is significant (only makes sense when isMany=true) \u2018isUnique\u2019 Optional String A flag specifying whether the values of the derived attribute are unique (only makes sense when isMany=true) \u2018isMany\u2019 Optional String The multiplicity of the derived attribute","title":"Table 7:    \u2018derivedAttribute\u2019 attributes"},{"location":"server/file-config/#table-8-derivation-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018derivation\u2019 \u2018language\u2019 Required String The language used to express the derivation logic. 
Available languages in Hawk: org.hawk.epsilon.emc.EOLQueryEngine, org.hawk.orientdb.query.OrientSQLQueryEngine, org.hawk.epsilon.emc.EPLQueryEngine","title":"Table 8:     \u2018derivation\u2019 attributes"},{"location":"server/file-config/#table-9-indexedattribute-attributes","text":"Element Name Attribute name Optional/Required Type Description \u2018indexedAttribute\u2019 \u2018attributeName\u2019 Required String The name of the indexed attribute. \u2018typeName\u2019 Required String The name of the type to which the indexed attribute belongs. \u2018metamodelUri\u2019 Required String The URI of the metamodel to which the indexed attribute belongs.","title":"Table 9:     \u2018indexedAttribute\u2019 attributes"},{"location":"server/file-config/#location","text":"Configuration files are expected to be located in the \u2018configuration\u2019 folder in the server\u2019s home directory. Each Hawk instance should have its own configuration file. There are no rules on how the file should be named. It is a good practice to include the Hawk instance name in the file name for easy recognition.","title":"Location"},{"location":"server/file-config/#how-to-useenable-hawk-instance-configuration-engine","text":"You can follow this video tutorial , or alternatively follow these steps: Download the hawk-server-*.zip file for your operating system and architecture of choice from Hawk Server With Configuration Create a configuration file for each instance required to run in the Hawk server. 
Edit configuration files: Set instance name, backend, delay Add list of plugins to be enabled Add metamodel file locations to be registered Add repositories that are to be indexed Add any required derived attributes Add any required indexed attributes Save the configuration files to the \u2018configuration\u2019 folder in the server\u2019s home directory (see figure 1) Perform any other configuration required by the Hawk Server and start the server (by following the instructions at Deploying-and-running-the-server ) Check that the Hawk instances are added and running by typing \u2018hawkListInstances\u2019 in the server\u2019s command terminal:","title":"How to use/enable Hawk instance configuration engine"},{"location":"server/file-config/#usage-notes","text":"Deleting configuration files from the directory will not delete instances from the server. However, the server will not start those instances. To test the Hawk server with the Measure Platform, refer to Using HawkQueryMeasure to query a Hawk instance running in a Hawk Server","title":"Usage Notes"},{"location":"server/logging/","text":"Logging in Hawk is done through the Logback library. The specific logback.xml file is part of the org.hawk.service.server.logback plugin fragment. If you need to edit it, it is located in the plugins/org.hawk.service.server.logback_<HAWK RELEASE> folder from the main directory of the server. 
A typical configuration with Hawk logging at the DEBUG level, with time-based rolling and all messages going to the hawk.log file would look as follows: <configuration> <appender name= \"STDOUT\" class= \"ch.qos.logback.core.ConsoleAppender\" > <layout class= \"ch.qos.logback.classic.PatternLayout\" > <Pattern> %d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n </Pattern> </layout> </appender> <appender name= \"FILE\" class= \"ch.qos.logback.core.rolling.RollingFileAppender\" > <file> hawk.log </file> <encoder class= \"ch.qos.logback.classic.encoder.PatternLayoutEncoder\" > <Pattern> %d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n </Pattern> </encoder> <rollingPolicy class= \"ch.qos.logback.core.rolling.TimeBasedRollingPolicy\" > <!-- rollover daily --> <fileNamePattern> mylog-%d{yyyy-MM-dd}.%i.txt </fileNamePattern> <maxHistory> 60 </maxHistory> </rollingPolicy> </appender> <logger name= \"org.eclipse.jetty\" level= \"warn\" additivity= \"false\" > <appender-ref ref= \"STDOUT\" /> </logger> <logger name= \"ch.qos.logback\" level= \"error\" additivity= \"false\" > <appender-ref ref= \"STDOUT\" /> </logger> <logger name= \"org.apache.shiro\" level= \"error\" additivity= \"false\" > <appender-ref ref= \"STDOUT\" /> </logger> <!-- Change to \"error\" if Hawk produces too many messages for you --> <logger name= \"org.hawk\" level= \"debug\" additivity= \"false\" > <appender-ref ref= \"STDOUT\" /> <appender-ref ref= \"FILE\" /> </logger> <root level= \"debug\" > <appender-ref ref= \"STDOUT\" /> </root> </configuration>","title":"Logging"}]}
\ No newline at end of file
diff --git a/sitemap.xml b/sitemap.xml
index 8b06762..3873b08 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -2,117 +2,117 @@
 <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
     <url>
@@ -137,7 +137,7 @@
     </url>
     <url>
      <loc>None</loc>
-     <lastmod>2020-12-01</lastmod>
+     <lastmod>2021-04-10</lastmod>
      <changefreq>daily</changefreq>
     </url>
 </urlset>
\ No newline at end of file
diff --git a/sitemap.xml.gz b/sitemap.xml.gz
index 028ebf8..2a8517c 100644
--- a/sitemap.xml.gz
+++ b/sitemap.xml.gz
Binary files differ