Hawk includes multiple optional features to integrate the Thrift APIs with regular Eclipse-based tooling, among them the .hawkmodel model access descriptors used by the EMF resource abstraction mentioned above. This page documents how these different features can be used.
When creating a Hawk instance for the first time (using the dialog shown below), users can specify which factory will be used. The name of the selected factory will be saved into the configuration of the instance, allowing Hawk to recreate the instance in later executions without asking again. Hawk provides a default LocalHawkFactory whose LocalHawk instances operate in the current Java virtual machine. Users can also specify which Hawk components should be enabled.
A factory can also be used to “import” instances that already exist but Hawk does not know about. For the local case, these would be instances that were previously removed from Eclipse but whose folders were not deleted. The Eclipse import dialog looks like this:
The “Thrift API integration for Hawk GUI” feature provides a plugin that contributes a new indexer factory, ThriftRemoteHawkFactory, which produces ThriftRemoteHawk instances that use ThriftRemoteModelIndexer indexers. When creating a new instance, the factory will use the createInstance operation to add the instance to the server. When used to “import”, the remote factory retrieves the list of Hawk instances available on the server through the listInstances operation of the Thrift API. Management actions (such as starting or stopping the instance) and their results are likewise translated between the user interface and the Thrift API.
The Hawk user interface provides live updates on the current state of each indexer, with short status messages and an indication of whether the indexer is stopped, running or updating. Management actions and queries are disabled during an update, to prevent data consistency issues. The Hawk indexer in the remote server talks to the client through an Artemis queue: please make sure Artemis has been set up correctly in the server (see the setup guide).
All these aspects are transparent to the user: the only difference is selecting the appropriate “Instance type” in the new instance or import dialogs and entering the URL to the Hawk Thrift endpoint. If the remote instance type is chosen, Hawk will only list the Hawk components that are installed in the server, which may differ from those installed in the client.
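The create/import split described above can be summarised with a small sketch. This is an illustrative model only, not Hawk's actual Java code: the class and method names mirror the text (LocalHawkFactory, ThriftRemoteHawkFactory, createInstance, listInstances), but the structure is a simplification.

```python
# Hypothetical sketch of the factory pattern described above: a factory can
# either create a new instance or "import" instances the tool does not yet
# know about. Names are illustrative, not Hawk's actual API.

class LocalHawkFactory:
    """Instances run in the current JVM; importable ones are leftover folders
    that were removed from Eclipse but whose directories were not deleted."""
    def __init__(self, known, on_disk):
        self.known, self.on_disk = set(known), set(on_disk)

    def create_instance(self, name):
        self.known.add(name)
        self.on_disk.add(name)

    def list_importable(self):
        # folders that exist on disk but are unknown to the workspace
        return sorted(self.on_disk - self.known)


class ThriftRemoteHawkFactory:
    """Delegates to the server's createInstance / listInstances operations
    (Thrift calls in the real tool; 'client' stands in for the Thrift stub)."""
    def __init__(self, client):
        self.client = client

    def create_instance(self, name):
        self.client.createInstance(name)

    def list_importable(self):
        return self.client.listInstances()
```

Both factories expose the same two entry points, which is what lets the new-instance and import dialogs treat local and remote instances uniformly.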
There are many different use cases for retrieving models over the network, each with its own requirements. The EMF model abstraction uses a .hawkmodel model access descriptor to specify the exact configuration we want to use when fetching the model over the network. .hawkmodel files can be opened by any EMF-compatible tool and operate just like a regular model.
To simplify the creation and maintenance of these .hawkmodel files, an Eclipse-based editor is provided in the “Remote Hawk EMF Model UI Feature”. The editor is divided into three tabs: a form-based tab for editing most aspects of the descriptor in a controlled manner, another form-based tab for editing the effective metamodel to limit the contents of the model, and a text-based tab for editing the descriptor directly.
Here is a screenshot of the main tab:
The main form-based tab is divided into three sections:
The “Instance” section provides connection details for the remote Hawk instance: the URL of the Thrift endpoint, the Thrift protocol to use (more details in D5.6) and the name of the Hawk instance within the server. “Instance name” can be clicked to open a selection dialog with all the available instances.
The “Username” and “Password” fields only need to be filled in if using the .hawkmodel file outside Eclipse. When using the .hawkmodel inside Eclipse, the remote EMF abstraction will fall back on the credentials stored in the Eclipse secure store if needed.
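The lookup order above can be sketched as follows. This is not Hawk's real code: the function name and the dictionary-based descriptor and store are illustrative stand-ins for the .hawkmodel fields and the Eclipse secure store.

```python
# Illustrative sketch of the credential lookup order described above:
# credentials embedded in the .hawkmodel descriptor take precedence;
# otherwise, when running inside Eclipse, the secure store is consulted.

def resolve_credentials(descriptor, secure_store=None):
    user = descriptor.get("username")
    password = descriptor.get("password")
    if user and password:
        return user, password
    if secure_store is not None:  # only available when running inside Eclipse
        return secure_store.get("username"), secure_store.get("password")
    raise ValueError("no credentials available outside Eclipse")
```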
The “Contents” section allows for filtering the contents of the Hawk index to be read and changing how they should be loaded:
IfcActor. Without this field, the query would need to specify which of the two metamodels should be used on every reference to IfcActor, which is unwieldy and error-prone. With this field filled in, the query will resolve ambiguous type references against the IFC2x3 metamodel.

The “Subscription” section allows users to enable live updates in the opened model through the watchGraphChanges operation and an Apache Artemis queue of a certain durability. To allow the server to recognize users that reconnect after a connection loss, a unique client ID should be provided.
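The disambiguation rule can be illustrated with a short sketch. This is an assumption-laden simplification, not Hawk's query engine: metamodels are modelled as a mapping from namespace URI to type names, and the URIs used below are placeholders.

```python
# Hedged sketch of the disambiguation described above: a type name that
# exists in several registered metamodels is resolved against the configured
# default namespace(s). Data structures and URIs are illustrative only.

def resolve_type(name, metamodels, default_namespaces):
    candidates = [uri for uri, types in metamodels.items() if name in types]
    if len(candidates) <= 1:
        return candidates[0] if candidates else None
    for uri in default_namespaces:  # e.g. the IFC2x3 metamodel's URI
        if uri in candidates:
            return uri
    raise ValueError(f"ambiguous type {name}: {sorted(candidates)}")
```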
The effective metamodel editor tab presents a table that lists all the metamodels registered in the selected remote Hawk instance, their types, and their features (called “slots” by the Hawk API). It is structured as a tree with three levels, with the metamodels at the root level, the types inside the metamodels, and their slots inside the types.
The implicit default is that all metamodels are completely included, but users can manually include or exclude certain metamodels, types or slots within the types. This can be done through drop-down selection lists on the “State” column of the table, or through the buttons on the right of the table:
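The three-level inheritance of inclusion states can be sketched as below. This is an illustrative model under stated assumptions, not Hawk's implementation: explicit choices are stored per (metamodel, type, slot) path, anything unset inherits from its parent level, and the implicit root default is "included".

```python
# Illustrative model of the effective metamodel states described above:
# each metamodel, type, or slot may be explicitly included or excluded;
# anything unset inherits from its parent, and the default is "included".

INCLUDED, EXCLUDED = "included", "excluded"

def effective_state(states, metamodel, type_=None, slot=None):
    """states maps (metamodel, type, slot) paths to explicit choices,
    with None standing for 'whole metamodel' or 'whole type'."""
    for key in ((metamodel, type_, slot),
                (metamodel, type_, None),
                (metamodel, None, None)):
        if key in states:
            return states[key]  # most specific explicit choice wins
    return INCLUDED  # default: all metamodels are completely included
```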
The effective metamodel is saved as part of the .hawkmodel file, and uses both inclusion and exclusion rules to remain as compact as possible (as it will need to be sent over the network). The rules work as follows: