<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>Logical Model Integration in Eclipse</title>
<meta http-equiv="Content-Type"
content="text/html; charset=iso-8859-1">
</head>
<body>
<h1>Proposed Support for Logical Model Integration in Eclipse</h1>
<p>Version: 0.3</p>
<p>This document contains several sections which describe various aspects of the
proposed or potential support for improved logical model integration in the
Eclipse Platform (bug <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=37723">37723</a>).
The requirements document can be found <a href="../../source/logical-support.doc">here</a>
and a description of the initial work performed in 3.1 is <a href="http://dev.eclipse.org/viewcvs/index.cgi/%7Echeckout%7E/platform-vcm-home/docs/online/team3.1/logical-physical-mappings.html">here</a>.
The following list summarizes each item and provides a brief description of
the impact on the Eclipse Platform and its clients.</p>
<ul>
<li><a href="#ProblemsView">Problems View</a>: The Problems view needs to be
more &quot;logical model&quot; aware. We are still planning on using IResource
based markers as the means of managing problems. The logical model integration
would affect how problems are presented and filtered. Some example features
being considered are: <br>
<ul>
<li>the ability to filter by selection of model elements that map to resources
via the ResourceMapping API</li>
<li>generic columns (e.g. Description, Resource, Path, Location) whose displayed
values would be model appropriate.</li>
<li>Problem type specific affordances
<ul>
<li>properties view for problem-type specific fields</li>
<li>Show In model view support</li>
<li>custom filters</li>
</ul>
</li>
</ul>
</li>
<p> The bulk of the work for this item is in the Problems view itself. Clients
would only need to do work if the current filtering was inadequate. The work
would involve defining model specific filters and properties display. For
JDT, work is probably not required: the relationship between resources and
Java elements is strong enough that model specific filters may not be needed.</p>
<li><a href="#GenericNavigator">Common Navigator</a>: The Common Navigator is
being pushed down from WTP into the Platform. This should start happening
soon with the hope of having the initial contribution available for 3.2 M2.
The view allows model tooling to add a root node to the Navigator and control
what appears under that node. Clients that wish to plug into the view will
need to provide a root element, content provider, label provider, sorter and
action set for inclusion in the navigator. Clients with existing Navigator
style views can decide whether to keep their view separate or integrate it
into the Common Navigator. For JDT, they will need to integrate with the view
to remain consistent with the Platform.<br>
<br>
</li>
<li><a href="#OperationParticipation">Maintaining Workspace Consistency</a>:
Eclipse has an operation mechanism in the LTK plugin which supports participation
in basic refactoring operations (delete, rename, etc). This is currently only
used to support participation in Java refactorings but it could also be used
for operations on other models. The work that would need to be done for this
is: <br>
<ul>
<li>Add support to retarget operations to the highest level model.</li>
<li>Describe how model tooling can make use of the LTK refactoring participant
support in order to allow dependent models to participate in refactorings.</li>
<li>Modifications may need to be made to the LTK framework itself to handle
the characteristics of multi-level models and possibly additional refactoring
operations.</li>
</ul>
<p>Support for retargeting would need to be added to the platform and to
any client that anticipates that higher level models could be built on top
of their model, including JDT. </p>
</li>
<li><a href="#ObjectContributionSupport">Team Operations on Model Elements</a>:
Provisional support for more than just a one-to-one mapping from resource
to model element was added in 3.1. The additional areas that need to be addressed
are: <br>
<ul>
<li>Supporting participation of higher level models in team operations performed
on lower level models. That is, a model element may consist of several
files that should always be operated on together. Hence an operation performed
in the Resource Navigator on one of these files should include all of
them.</li>
<li>Content providers can be used to make the view of a model differ from
the model structure. This means that team operations on the model elements
in these views will not match what the user sees. Addressing this issue
either requires restricting what content providers do in model views or
involving the content provider in the resource mapping determination process.</li>
</ul>
<p>The work items for this are:
<ul>
<li>A team participant extension point needs to be created.</li>
<li>Team providers need to consult the extension when performing operations
on resource mappings. Compare will also need to make use of this mechanism
for performing local history compare and replace operations.</li>
<li>Model tooling should participate in operations on appropriate elements
of lower level models.</li>
</ul>
<p>There is little work anticipated here for JDT since their model is similar
enough to the file model. Model tooling with models that hide the file structure
will need to provide a team participant.</p>
</li>
<li><a href="#DecoratorSupport">Team Decorator Support</a>: Team decorations
should appear on any model element on which team operations may be performed.
This requires the following: <br>
<ul>
<li>Improved adaptability of enablement rules for decorators. This has been
done and is available in 3.2 M1.</li>
<li>Change state determination for model elements contained in files. Model
tooling can decide at what depth they want to perform repository operations
in their models and need to provide change determination for sub-file
elements.</li>
<li>Label update propagation in hierarchical model views: This requires
change to the decorator mechanism and to views which decorate elements
(i.e. changes to the JFace viewers and custom model tooling viewers that
handle label updating in custom ways).</li>
</ul>
<p>JDT has a custom viewer that handles label updates, so they will need to
adopt any new mechanism for performing label update propagation.</p>
</li>
<li><a href="#ModelLevelMerging">Model Level Merging</a>: Support for model
merges can be broken up into two features: <br>
<ul>
<li>Support for head-less merging: This will require additional API and
support in the Compare component and an extension point to allow model
tooling to plug in a head-less merger. </li>
<li>Support for manual merge: Compare will also need to provide API which
allows model tooling to plug in UI based merge tools that can operate
on their model elements. </li>
</ul>
<p>Compare already has the above mentioned support for models that have a
one-to-one mapping between files and model elements so only clients who
have more complicated mappings would need to provide this additional support.
For JDT, there should not be much work here since they already provide a
file-based content viewer. Compare will need to make use of any new API
in the local history operations.</p>
</li>
<li><a href="#DisplayingLogicalModels">Model Display in Team Operations</a>:
There are two types of displays that are relevant in team operations: <br>
<ul>
<li>A model element synchronization state view similar to that of the synchronize
view. Model tooling would need to build the model hierarchy they wish
to display in this view.</li>
<li>A model specific view that can be used in team operation dialogs. This
has a lot of similarities with the requirements of the Common Navigator.</li>
</ul>
<p>The bulk of the work for clients here will be providing the synchronization
view. This includes JDT as it would be beneficial to see a structure in
the synchronize view that matches what appears in the Packages Explorer.</p>
</li>
<li><a href="#Bootstrapping">Remote Discovery</a>: There are two aspects to
this: browsing models in the repository and transferring models to the local
workspace. The bulk of the work for this item would be for the repository
tooling as they would have to provide the remote browsing capabilities. The
model tooling will have to do some work but the amount will depend on how
the remote state of the model is represented.</li>
</ul>
<p>After presenting proposals for each of these areas, we discuss the potential
role of <a href="#EMF">EMF</a> and present some generic <a href="#TeamScenarios">Team
Scenarios</a> that describe how the functionality we are proposing would play
out.</p>
<h2><a name="ProblemsView"></a>Problems View</h2>
<p>Making the problems view more logical model aware has been broken into several
pieces, as described in the following sections.</p>
<h3>Filtering</h3>
<p>This section considers how to improve the usability of filters in the Problems
view. Work has started on this in 3.2 and there are several bug reports on the issue (<a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=108013">108013</a>,
<a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=108015">108015</a>, <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=108016">108016</a>).
The basic ideas are: </p>
<ul>
<li>allow the user or a plugin to define filters that can be combined</li>
<li>provide a means of enabling filters quickly (i.e. without opening the filter
dialog).</li>
<li>filter based on the model element selected and not just the resource or
resource mapping (i.e. a file could contain several model objects and selecting
one of them should only show markers associated with that particular element).</li>
</ul>
<p>An additional requirement identified by clients is the ability to filter on
model specific information. We will need to collect some concrete scenarios
on this to better understand the requirement.</p>
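<p>As an illustration of how combinable, model-aware filters might compose, here
is a minimal sketch in plain Java. The <code>Marker</code> record and the filter
factory names are hypothetical stand-ins for <code>IMarker</code> and the eventual
filter API; the point is only that enabled filters can be AND-combined and that a
selection-based filter can be expressed over the files a model element maps to.</p>

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of combinable Problems view filters: each filter is a
// predicate over a simplified marker, and all enabled filters are AND-combined.
// "Marker" is a stand-in for IMarker; a real selection filter would derive the
// matched paths from the ResourceMapping of the selected model element.
public class CombinableFilters {
    public record Marker(String path, String type, int severity) {}

    // Matches markers whose resource falls under the files mapped from the
    // selected model element (here simplified to a path prefix).
    public static Predicate<Marker> selectionFilter(String mappedPathPrefix) {
        return m -> m.path().startsWith(mappedPathPrefix);
    }

    public static Predicate<Marker> severityFilter(int minSeverity) {
        return m -> m.severity() >= minSeverity;
    }

    // Combine all enabled filters so a marker must pass every one.
    public static Predicate<Marker> combine(List<Predicate<Marker>> enabled) {
        return enabled.stream().reduce(m -> true, Predicate::and);
    }
}
```

<p>A user or plug-in defined filter would then simply be another predicate added
to the enabled list, which keeps quick enabling and disabling cheap.</p>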
<h3>Properties</h3>
<p>Each problem type has different relevant properties. Java errors have a file
path and line number. Other models may have other ways of describing a problem
(e.g. a resource description and a field name). Ideally each problem would display
its relevant properties. However, the Problems view often contains many different
types of problems, each of which may have different relevant properties. Table
widgets have a single set of columns, leading to the following possibilities:</p>
<ul>
<li>A generic set of properties that all problems have are displayed in the
table.</li>
<li>The table has columns for the relevant properties of all problem types it
contains and the columns are empty for problems that do not have that property.
This may be acceptable for a small number of types but becomes unmanageable
if there are a large number of types with different relevant properties.</li>
<li>The table is constrained to show only a single problem type at a time (or
a small number of different types to keep the number of columns reasonable).</li>
</ul>
<p>Given that users may want to see different problem types in the Problems view
at the same time, the most practical approach is to provide a generic set of
columns (e.g. Severity, Description, Element, Path, Location) and allow the
problem type to dictate what values appear in the columns.</p>
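<p>The generic-column approach can be sketched as follows: each problem type
supplies model-appropriate text for the shared set of columns. The
<code>ProblemColumnAdapter</code> interface and the attribute names are
hypothetical, not the eventual Platform API; a real mechanism would presumably
be keyed off the marker type.</p>

```java
import java.util.Map;

// Hypothetical sketch of generic Problems view columns: each problem type
// registers an adapter that renders its own attributes into the shared
// columns (Severity, Description, Element, Path, Location).
public class GenericColumns {
    public enum Column { SEVERITY, DESCRIPTION, ELEMENT, PATH, LOCATION }

    public interface ProblemColumnAdapter {
        String getText(Map<String, Object> markerAttributes, Column column);
    }

    // A Java-style adapter: Location is rendered as a line number, whereas
    // another model might render a field name or a diagram coordinate.
    public static final ProblemColumnAdapter JAVA_ADAPTER = (attrs, col) ->
        switch (col) {
            case SEVERITY -> String.valueOf(attrs.getOrDefault("severity", ""));
            case DESCRIPTION -> String.valueOf(attrs.getOrDefault("message", ""));
            case ELEMENT -> String.valueOf(attrs.getOrDefault("element", ""));
            case PATH -> String.valueOf(attrs.getOrDefault("path", ""));
            case LOCATION -> "line " + attrs.getOrDefault("lineNumber", "?");
        };
}
```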
<h3>Problem Type Specific Behavior</h3>
<p>The Problems view currently supports custom Quick Fixes for a problem type.
Another useful feature would be the ability to navigate to a model specific view.
There is currently a Show In Navigator action which could be enhanced to support
showing the affected model element in a model view (e.g. Show in Packages Explorer for
Java problems).</p>
<h3>Perspective/Context/Activity</h3>
<p>If there were some way of determining what role the user was playing at a particular
time, it would be possible to tailor views to that particular task. Such information
could be used to enable particular Problems view filters.</p>
<h2><a name="GenericNavigator"></a>Common Navigator</h2>
<p>The Common Navigator is being pushed down from WTP into the Platform. The view
allows model tooling to add a root node to the Navigator and control what appears
under that node. Clients that wish to plug into the view will need to provide
a root element, content provider, label provider, sorter and action set for
inclusion in the navigator. Clients with existing Navigator style views can
decide whether to keep their view separate or integrate it into the Common Navigator.
For JDT, they will probably want to integrate with the view to remain consistent
with the Platform.</p>
<p>One aspect of the Common Navigator that is of particular interest to team operations
is the ability to obtain a content provider that can show a logical model in
a tree viewer. This would allow logical models to appear in team operation views
and dialogs. The Common Navigator proposal defines an extension that provides
this capability. The class for this extension is the <code>NavigatorContentExtension</code>
and it provides the following:</p>
<ul>
<li>A content provider the view can use to display the model tree</li>
<li>A label provider the view can use for getting the icon and text for individual
model elements</li>
<li>An action provider so the view can present model specific actions in appropriate
places (e.g. the context menu)</li>
<li>A comparator for sorting model elements</li>
</ul>
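<p>The shape of such an extension can be sketched with plain Java interfaces.
These are deliberately simplified placeholders for the JFace and Common Navigator
types (<code>ITreeContentProvider</code>, <code>ILabelProvider</code>, and so on),
intended only to show the four capabilities listed above in one place.</p>

```java
// Simplified stand-ins for the pieces a NavigatorContentExtension supplies.
// These are not the real Common Navigator or JFace interfaces.
public class ContentExtensionSketch {
    public interface TreeContentProvider { Object[] getChildren(Object parent); }
    public interface LabelProvider { String getText(Object element); }
    public interface ActionProvider { String[] getContextMenuActions(Object element); }

    // The extension bundles a content provider, label provider, action
    // provider and comparator, mirroring the list above.
    public interface NavigatorContentExtension {
        TreeContentProvider getContentProvider();
        LabelProvider getLabelProvider();
        ActionProvider getActionProvider();
        java.util.Comparator<Object> getComparator();
    }
}
```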
<p>For this API to be usable in Team operations, the <code>NavigatorContentExtension</code>
contributed by the model must have access to the context of the team operation.
Outside the context of a team operation, the content extension only has the
local workspace from which to build its model tree. However, within the context
of a team operation, there may be additional resources involved, specifically,
resources that exist remotely but not locally (i.e. outgoing deletions or incoming
additions). The model's content extension would need access to a team context
so that these additional resources could be considered when displaying the model.
</p>
<p>The following list summarizes the requirements that would be placed on a <code>NavigatorContentExtension</code>
when being used to display the model in a team context.</p>
<ul>
<li>the ability to show model elements that represent the remote state of a
model (e.g. incoming changes that exist remotely but not locally).</li>
<li>the ability to decorate model elements with their synchronization direction
(incoming, outgoing or conflicting) and change type (addition, deletion or
change). </li>
<li>the ability to filter the view based on repository provider state for the
underlying files (e.g. only show outgoing changes).</li>
</ul>
<p>Support for this can either be integrated with the Common Navigator API or
made available as Team specific API (see <a href="#DisplayingLogicalModels">Model
Display in Team Operations</a>). Our preference would be to integrate the team
requirements with the Common Navigator requirements so that model providers
only need to implement one API. In the rest of this section we will address
the following two questions:</p>
<ul>
<li>How is the team context communicated to a <code>NavigatorContentExtension</code>?</li>
<li>What does the team context look like?</li>
</ul>
<p>The next two sections propose answers to these questions.</p>
<h3>Configuring a NavigatorContentExtension with a Team Context</h3>
<p>In the Common Navigator API description that was available at the time of writing,
the <code>NavigatorContentExtension</code> is instantiated for each viewer and
has access to the state of that viewer. In the context
of a team operation, the team provider would create the viewer that will be
used to display the model tree. It could also associate the team context with
the viewer so that it is available to the content extension.</p>
<p>A team operation requires the ability to obtain a content provider that can
consider the team context when it builds a model tree. Since the tree is built
by the content provider, the following method on <code>NavigatorContentExtension</code>
would need to consult the viewer state to see if a team context is available:</p>
<blockquote>
<p><code>ITreeContentProvider getContentProvider()</code></p>
</blockquote>
<p>The team context itself is defined by the <code>ISynchronizationContext</code>
interface described below. The model would be responsible for displaying a model
tree that includes relevant model objects that may not exist locally but are
involved in the team operation.</p>
<p>In addition, the ability to decorate model elements with their team state is
required. Adding the following method to <code>NavigatorContentExtension</code>
would provide this capability:</p>
<blockquote>
<p><code>ICommonLabelDecorator getLabelDecorator()</code></p>
</blockquote>
<p>The provided decorator would need to consult the team context that is available
from the viewer state in order to determine the proper decorations for each
model element.</p>
<p>The other remaining requirement is filtering based on team state. Filtering
is not as well defined in the Common Navigator proposal, but an approach similar
to that described for the other two requirements could be used to provide a
filter that filters on team state.</p>
<p>The above is intended to give an idea of what is required rather than an exact
solution. The final solution will depend on the final shape of the Common
Navigator.</p>
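<p>The configuration pattern described in this section might look roughly as
follows. The viewer-state map, the simplified <code>SynchronizationContext</code>
interface and the key used to store it are all illustrative assumptions, not the
actual Common Navigator or Team API.</p>

```java
import java.util.Map;

// Sketch of the proposed pattern: the team operation places the
// synchronization context into the viewer state, and the model's content
// provider consults it when building the tree so that remote-only elements
// (e.g. incoming additions) can be shown.
public class TeamAwareContentProvider {
    public interface SynchronizationContext {
        // Names of members that exist remotely but not locally for a folder.
        java.util.List<String> getRemoteOnlyMembers(String folder);
    }

    // The viewer state the Common Navigator would make available.
    private final Map<String, Object> viewerState;

    public TeamAwareContentProvider(Map<String, Object> viewerState) {
        this.viewerState = viewerState;
    }

    // Outside a team context only local children are shown; within one,
    // remote-only elements are appended to the tree.
    public java.util.List<String> getChildren(String folder, java.util.List<String> localChildren) {
        var children = new java.util.ArrayList<>(localChildren);
        Object ctx = viewerState.get("team.synchronizationContext");
        if (ctx instanceof SynchronizationContext sc) {
            children.addAll(sc.getRemoteOnlyMembers(folder));
        }
        return children;
    }
}
```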
<h3>The Team Context API</h3>
<p>The <code>ISynchronizationContext</code> API below could be used to provide
the team context to a model provider. It makes use of the following API pieces:</p>
<ul>
<li>A <code>SyncInfo</code> contains a description of the synchronization state
of a file system resource. The synchronization state includes a direction
(incoming, outgoing or conflicting) and change type (addition, deletion or
change). </li>
<li>The <code>SyncInfoTree</code> contains a description of all of the resources
that are out-of-sync. </li>
<li>The scope (<code>ISynchronizeScope</code>) defines the input used to scope
the synchronization. It has a set of root resources and a containment check
to define whether a resource that is a child of one of the roots is contained
in the scope. Particular subclasses may provide additional information (e.g.
the set of resource mappings that define the scope).</li>
</ul>
<p>The model provider can use this information to determine what model tree to
build, the synchronization state of model elements and what additional elements
need to be displayed.</p>
<pre>/**
 * Allows a model provider to build a view of their model that includes
 * synchronization information with a remote location (usually a repository).
 *
 * The scope of the context is defined when the context is created. The creator
 * of the scope may effect changes to the scope which will result in property
 * change events from the scope and may result in sync-info change events from
 * the sync-info tree. Clients should note that it is possible that a change in
 * the scope will result in new out-of-sync resources being covered by the scope
 * but not result in a sync-info change event from the sync-info tree. This can
 * occur because the set may already have contained the out-of-sync resource
 * with the understanding that the client would have ignored it. Consequently,
 * clients should listen to both sources in order to guarantee that they update
 * any dependent state appropriately.
 *
 * This interface is not intended to be implemented by clients.
 *
 * @since 3.2
 */
public interface ISynchronizationContext {

    /**
     * Synchronization type constant that indicates that
     * the context is a two-way synchronization.
     */
    public final static String TWO_WAY = "two-way"; //$NON-NLS-1$

    /**
     * Synchronization type constant that indicates that
     * the context is a three-way synchronization.
     */
    public final static String THREE_WAY = "three-way"; //$NON-NLS-1$

    /**
     * Return the scope of this synchronization context. The scope determines
     * the set of resources to which the context applies. Changes in the scope
     * may result in changes to the sync-info available in the tree of this
     * context.
     *
     * @return the scope of this synchronization context
     */
    public ISynchronizeScope getScope();

    /**
     * Return a tree that contains <code>SyncInfo</code> nodes for resources
     * that are out-of-sync. The tree will contain sync-info for any out-of-sync
     * resources that are within the scope of this context. The tree
     * may include additional out-of-sync resources, which should be ignored by
     * the client. Clients can test for inclusion using the method
     * {@link ISynchronizeScope#contains(IResource)}.
     *
     * @return a tree that contains a <code>SyncInfo</code> node for any
     *     resources that are out-of-sync
     */
    public SyncInfoTree getSyncInfoTree();

    /**
     * Returns synchronization info for the given resource, or <code>null</code>
     * if there is no synchronization info because the resource is not a
     * candidate for synchronization.
     *
     * Note that sync info may be returned for non-existing resources or for
     * resources which have no corresponding remote resource.
     *
     * This method will be quick. If synchronization calculation requires content from
     * the server it must be cached when the context is created or refreshed. A client should
     * call refresh before calling this method to ensure that the latest information
     * is available for computing the sync state.
     *
     * @param resource the resource of interest
     * @return sync info
     * @throws CoreException
     */
    public SyncInfo getSyncInfo(IResource resource) throws CoreException;

    /**
     * Return the synchronization type. A type of <code>TWO_WAY</code>
     * indicates that the synchronization information (i.e.
     * <code>SyncInfo</code>) associated with the context will also be
     * two-way (i.e. there is only a remote but no base involved in the
     * comparison used to determine the synchronization state of resources). A
     * type of <code>THREE_WAY</code> indicates that the synchronization
     * information will be three-way and include the local, base (or ancestor)
     * and remote.
     *
     * @return the type of merge to take place
     *
     * @see org.eclipse.team.core.synchronize.SyncInfo
     */
    public String getType();

    /**
     * Dispose of the synchronization context. This method should be
     * invoked by clients when the context is no longer needed.
     */
    public void dispose();

    /**
     * Refresh the context in order to update the sync-info to include the
     * latest remote state. Any changes will be reported through the change
     * listeners registered with the sync-info tree of this context. Changes to
     * the set may be triggered by a call to this method or by a refresh
     * triggered by some other source.
     *
     * @see SyncInfoSet#addSyncSetChangedListener(ISyncInfoSetChangeListener)
     * @see org.eclipse.team.core.synchronize.ISyncInfoTreeChangeEvent
     *
     * @param traversals the resource traversals which indicate which resources
     *     are to be refreshed
     * @param flags additional refresh behavior. For instance, if
     *     <code>RemoteResourceMappingContext.FILE_CONTENTS_REQUIRED</code>
     *     is one of the flags, this indicates that the client will be
     *     accessing the contents of the files covered by the traversals.
     *     <code>NONE</code> should be used when no additional behavior
     *     is required
     * @param monitor a progress monitor, or <code>null</code> if progress
     *     reporting is not desired
     * @throws CoreException if the refresh fails. Reasons include:
     *     The server could not be contacted for some reason (e.g.
     *     the context in which the operation is being called must be
     *     short running). The status code will be
     *     SERVER_CONTACT_PROHIBITED.
     */
    public void refresh(ResourceTraversal[] traversals, int flags, IProgressMonitor monitor) throws CoreException;
}</pre>
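<p>To make the intended use of such a context concrete, the following
self-contained sketch shows how a model provider might aggregate per-file
synchronization state into a single decoration for a model element that spans
several files. The <code>Context</code> and <code>Direction</code> types are
simplified stand-ins for <code>ISynchronizationContext</code> and the
<code>SyncInfo</code> direction kinds, not the proposed API itself.</p>

```java
// Illustrative aggregation of file sync states into one model-element state:
// any mix of incoming and outgoing file changes is reported as conflicting.
public class ModelSyncState {
    public enum Direction { INCOMING, OUTGOING, CONFLICTING }

    public interface Context {
        Direction getSyncDirection(String file); // null when the file is in sync
    }

    // Walk the files of the element; the first out-of-sync direction seeds the
    // result, and any differing direction escalates to CONFLICTING.
    public static Direction elementState(Context ctx, java.util.List<String> files) {
        Direction result = null;
        for (String file : files) {
            Direction d = ctx.getSyncDirection(file);
            if (d == null) continue;
            if (result == null) result = d;
            else if (result != d) return Direction.CONFLICTING;
        }
        return result; // null means the whole element is in sync
    }
}
```

<p>A real implementation would obtain the per-file state from
<code>getSyncInfo</code> and listen to the sync-info tree for updates.</p>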
<p>&nbsp;</p>
<h2><a name="OperationParticipation"></a>Maintaining Workspace Consistency</h2>
<p>Model tools in Eclipse are typically layered. At the very least, there is the
model layer (e.g. Java) and the file-system layer (i.e. IResource). However,
in some cases, there may be more than two layers (e.g. J2EE&lt;-&gt;Java&lt;-&gt;IResource).</p>
<h3>Operation Participation</h3>
<p>There is already refactoring participant support in Eclipse which appears to
meet several of the requirements logical models have. The original proposal
for refactoring participation is described <a href="http://dev.eclipse.org/viewcvs/index.cgi/%7Echeckout%7E/jdt-ui-home/r3_0/proposals/refactoring/participants.html">here</a>.
The implementation does vary slightly from what is in the proposal but the proposal
is still a good description of the concepts involved.</p>
<p>Here is the summary of the features taken from the document:</p>
<ul>
<li>There is one generic action for the refactorings: rename, move, delete,
create and copy. This is stated in the proposal but has not yet been implemented.</li>
<li>Extensible refactorings are implemented using a processor/participant architecture.
Both processors and participants are contributed via XML. </li>
<li>There are three types of participants:
<ol>
<li>Participants that operate on the original element to be modified. </li>
<li>Participants that operate on elements derived from the original element.
</li>
<li>Participants that operate on the corresponding resource modifications.
</li>
</ol>
</li>
<li>Participants can't contribute to the UI, although this would be possible
for participants of category (1) if required. </li>
<li>There is no support to participate in changes done by a participant. Participants
can only participate in processor operations. </li>
<li>No support will be provided for composite refactorings. </li>
<li>Processors can override each other to provide a more specific processor
for more specific elements. Again, this has not yet been implemented.</li>
<li>The new refactoring UI is wizard/dialog based as the current Java refactoring
UI. The relationship between UI and processors is described in XML. Furthermore
special error and preview viewers can be contributed. </li>
<li>To support undo in an open refactoring architecture some enhanced undo support
from platform is needed.</li>
</ul>
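<p>The processor/participant architecture summarized above can be sketched in a
few lines. The types here are simplified stand-ins for the LTK
<code>RefactoringProcessor</code> and participant classes; changes are modeled
as plain strings purely for illustration.</p>

```java
import java.util.List;

// Minimal processor/participant sketch: the processor builds the core change
// for a rename, then asks each contributed participant for its own change
// covering the same operation.
public class RenameSketch {
    public interface Participant {
        String createChange(String element, String newName);
    }

    public static List<String> performRename(String element, String newName,
                                             List<Participant> participants) {
        var changes = new java.util.ArrayList<String>();
        changes.add("rename " + element + " to " + newName); // core change
        for (Participant p : participants) {
            changes.add(p.createChange(element, newName));
        }
        return changes;
    }
}
```

<p>In the real framework, participants are contributed via XML and the changes
are composed into a single undoable operation rather than a flat list.</p>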
<p>One possibility was to support participation in operations at all levels. That
is, JDT could participate in IResource level operations in order to react to
resource level changes. For instance, Java could participate in a *.java file
rename in the Resource Navigator and update any references appropriately (thus
treating the file rename as a Java compilation unit, or CU, rename). This would
lead to the following additional requirements:</p>
<ul>
<li>Any view that surfaces one of the basic operations will need to include
participants in the operation if there is any chance of another model needing
to control or participate in the change. For instance, the appropriate Resource
Navigator operations would need to be changed to include participants.</li>
<li>Higher level models (e.g. JDT) must participate in operations performed
on lower level models (e.g. files). As stated in the refactoring proposal,
operations on higher level model must also include participants from lower
level models. This must be done in such a way that a model layer does not
participate in the operation more than once.</li>
<li>Rendering in the refactoring preview would need to be model aware. That
is, the comparison view would need to show a text compare for text models
(such as Java) but a graphical compare for diagram changes, etc.</li>
<li>Transitive model relationships must be handled. For example, if there is
a higher-level model that participates in class renames and the Java model
participates in a file rename and treats it as a class rename, then the higher-level
model participants should be included by the Java participant when a file
rename results in a class rename.</li>
</ul>
<p>Experiments were done by JDT in Eclipse 3.0 and the following observations
were made for a package rename vs. a folder rename in which Java participates:</p>
<ul>
<li>the generated deltas will not be the same. An example is renaming a folder
which is a source folder in the Java model. A folder participant that fixes
up the Java model triggers the following deltas:
<ul>
<li>a resource delta notifying about the folder rename.</li>
<li>a Java model delta notifying about the build path changes.</li>
<li>a Java model delta notifying about the source folder rename without
build path updates. This delta occurs since the Java model listens to
resource deltas and tries to map them back to Java model deltas.</li>
</ul>
</li>
<li>whereas a rename of the source folder triggers the following deltas:
<ul>
<li>a Java model delta notifying about the source folder rename and the
build path update.</li>
<li>a resource delta notifying the folder rename.</li>
</ul>
Clients listening to Java model changes will expect the second set of deltas on a
source folder rename. Receiving the first set of deltas might confuse clients
listening to deltas, especially since they will receive two Java
model deltas.</li>
<li>the framework has to support participating in operations triggered by participants
(this is currently not supported in LTK). Consider the case of renaming a
*.java file. In this case the participant fixing up the references and the
type name of the CU has to load all participants interested in a type rename
since a WEB plug-in might want to fix up all references in JSP files.</li>
<li>higher level models can be in an inconsistent state when inspected by participants.
Again, consider the case of renaming an A.java file in the resource navigator.
If JDT reacts to this and wants to fix up the type name inside of A.java,
accessing A.java in the Java model would result in an exception since the
underlying file has already been renamed, but the Java model doesn't know about
this since the resource delta hasn't been broadcast yet.</li>
</ul>
<p>The next section addresses these issues by combining operation retargeting
with participation.</p>
<h3>Operation Retargeting</h3>
<p>To ensure that participants access models in a consistent state, all operations
have to be executed on the highest level model, and the operation has to describe
what happens in the lower level models so that the corresponding lower level
participants can be loaded. For example, when renaming a CU, the rename refactoring
also loads participants interested in file renames since a CU rename renames the
file in the underlying resource model. However, the system should also help the
user keep higher level models consistent when manipulating lower level models.
One approach would be for the system to inform the user about such situations and
allow the higher level operation to be triggered instead. For example, a rename
of a *.java file in the resource navigator could show a dialog telling the user
that, for model consistency, the file is better renamed using the Java Rename
refactoring, and asking whether the user wants to execute that action instead.
Doing so has the nice side effect that models are not forced to use the LTK
participant infrastructure; how to participate could be left open to the plug-in
providing the model operations.</p>
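<p>The retargeting decision itself could be as simple as the following sketch:
before a file-level operation runs, each registered model is asked whether it
wants to offer a higher level operation instead. All names here are
illustrative; a real implementation would pick the highest level model and
prompt the user as described above.</p>

```java
// Sketch of operation retargeting: models may claim a file-level operation
// and offer a higher level equivalent (e.g. a *.java rename becomes a
// compilation unit rename). If no model claims it, the original operation runs.
public class RetargetSketch {
    public interface ModelProvider {
        // Returns the higher level operation name, or null if not claimed.
        String higherLevelOperation(String file, String operation);
    }

    public static String retarget(String file, String operation,
                                  java.util.List<ModelProvider> models) {
        for (ModelProvider m : models) {
            String op = m.higherLevelOperation(file, operation);
            if (op != null) return op;
        }
        return operation;
    }
}
```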
<p>One potential complication arises when multiple models want to &quot;own&quot;
or &quot;control&quot; a resource. This is less of an issue if one is a higher
level model built on top of a lower level one. For instance, a J2EE model may
override the Java model and assume ownership of any Java files that are J2EE
artifacts, such as EJBs. However, problems arise if the two models are peers.
For instance, there may be several models that are generated from a WSDL (web
services) descriptor file. The user may need to pick which model gets control
for operations performed directly on the resource.</p>
<p>Note that this feature area has a great deal of overlap with the <a href="http://dev.eclipse.org/viewcvs/index.cgi/%7Echeckout%7E/platform-ui-home/R3_1/contributions-proposal/requestForComments.html">Improve
Action Contributions </a>work being proposed by the UI team.</p>
<h3>Operation Veto</h3>
<p>It is not clear that operation retargeting is desirable. That is, if a user
performs a delete on a file, it may be disconcerting if the delete is actually
performed on an EJB that consists of several files. An alternate approach is
to detect when an operation on a lower level model may have an effect on a higher
level model and ask the user to confirm that they really do want to perform
the operation on the lower level model.</p>
<h2><a name="ObjectContributionSupport"></a>Team Operations on Model Elements</h2>
<p>The support for having Team operations appear in the context menu of logical
elements is based on <code>ResourceMappings</code>. This support was available
as non-API in 3.1 and is described in the <a href="http://dev.eclipse.org/viewcvs/index.cgi/%7Echeckout%7E/platform-vcm-home/docs/online/team3.1/logical-physical-mappings.html">Support
Logical Resources - Resource Mappings</a> document. Here is a summary of what
is required for this:</p>
<ul>
<li>Repository tooling needs to be able to provide a <code>RemoteResourceMappingContext</code>
that gives the model access to the remote state and contents of the files
involved in the operation.</li>
<li>Model tooling needs to be able to determine which files are relevant for
its model elements given a <code>RemoteResourceMappingContext</code>. In many
cases, this is straightforward and does not require the context at all. In
others, the model may need to be able to query the file structure or file
contents from the context in order to determine which files need to be included.</li>
</ul>
<p>A <code>RemoteResourceMappingContext</code> is a means to allow the model to
see the state of the repository at a particular point in time. There are many
different terms used by different repository tools to identify this type of
view of the repository including version, branch, configuration, view, snapshot,
or baseline. The type of operation being performed dictates which file states
are accessible from the <code>RemoteResourceMappingContext</code>. For example,
when updating the local workspace to match the latest contents on the server,
the context would need to allow the client to access the latest contents for
remote files whose content differs from their local counterparts in order to
allow the model to determine if there are additional files that should be included
in the update. When committing, the context would need to provide the ancestor
state of any locally modified files so that the model could ascertain if there
are any outgoing deletions.</p>
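<p>As a rough illustration of this interplay, the sketch below reduces the context
to two queries and shows how a model might use the remote state to decide which
files belong to one of its elements. The <code>RemoteContext</code> interface and
all method and file names here are hypothetical simplifications, not the proposed
API.</p>

```java
import java.util.ArrayList;
import java.util.List;

// NOTE: RemoteContext is a hypothetical, simplified stand-in for
// RemoteResourceMappingContext; its method names are illustrative only.
class RemoteContextSketch {

    /** Minimal stand-in for the remote-state queries a context might offer. */
    interface RemoteContext {
        boolean contentDiffers(String path);      // does the remote file differ?
        List<String> fetchMembers(String folder); // remote children of a folder
    }

    /**
     * Compute the files a plug-in manifest "model element" should contribute
     * to a team operation: the manifest itself plus any remote companion file
     * (here, plugin.properties) that belongs to the same logical element.
     */
    static List<String> filesFor(String manifest, RemoteContext context) {
        List<String> files = new ArrayList<>();
        files.add(manifest);
        String folder = manifest.substring(0, manifest.lastIndexOf('/') + 1);
        for (String member : context.fetchMembers(folder)) {
            // Consult the remote state so companion files that only exist (or
            // differ) remotely are still included in the operation.
            if (member.endsWith("plugin.properties") && context.contentDiffers(member)) {
                files.add(member);
            }
        }
        return files;
    }
}
```

<p>In practice the model would query file structure or contents through the real
context; the point is that the decision about what to include can depend on remote
state, not just local state.</p>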
<p>There are still some outstanding issues that need to be solved in this area.</p>
<ul>
<li>Team operations on model elements may need to include other model elements.
For instance, an operation on the plugin.xml file may need to include the
plugin.properties file as they are both part of the plug-in manifest object.
Also, operations on a sub-element of a file will need to convey to the user
that all sub-elements of that file will be operated on.</li>
<li>Team operations have a life cycle that needs to be communicated to the model
provider. This will allow the model provider to efficiently manage the caching
of model state associated with a remote resource mapping context.</li>
<li>The model structure being displayed to the user may differ from the structure
of the model. This arises from the use of content providers in JFace which
allow a tree view to display a model in an arbitrary structure. A concrete
example of this is the Java Packages Explorer. It can display packages in either
flat or hierarchy mode. The flat mode matches the underlying model structure
whereas the hierarchy mode actually matches the file system structure (i.e.
packages are deep). Users will tend to associate the target of a team operation
as what is being displayed and not necessarily what the actual model structure
is.</li>
</ul>
<p>The following sections describe proposed solutions to these issues.</p>
<h3><a name="TeamOperation"></a>Team Operation Input Determination</h3>
<p>In order to ensure that the proper resources are included as the input to a
team operation, we introduce the concept of a model provider. A model provider
has the following:</p>
<ul>
<li>An id that uniquely identifies the model provider</li>
<li>A set of lower-level model providers that the model provider extends. For
example, the Java model provider would extend the Resources model provider
while the J2EE model provider would extend both the Java and Resources model
provider.</li>
<li>An enablement rule that determines which resources the model provider applies
to.</li>
</ul>
<p>Model providers would be used in the following way to ensure that the proper
resources were included in a team operation.</p>
<ul>
<li>Model tooling would plug into the modelProvider extension point and provide
an enablement rule to match on resources of interest.</li>
<li>When a Team operation is performed, the repository plug-in uses the selection
to obtain all the applicable model providers.</li>
<li>The model providers are given a chance to add additional model elements, and
hence files, to the Team operation.</li>
<li>The operation is performed.</li>
</ul>
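<p>The flow above can be sketched in plain Java. <code>ModelProvider</code> here
is a hypothetical, stripped-down stand-in for the proposed extension point; the
enablement rule is reduced to a simple predicate and the provider ids are made up.</p>

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;
import java.util.function.Predicate;

// NOTE: ModelProvider is a deliberately simplified, hypothetical version of
// the proposed modelProvider extension point, reduced to plain Java.
class OperationInputSketch {

    static class ModelProvider {
        final String id;                                // unique provider id
        final Predicate<String> enablement;             // resources of interest
        final Function<String, List<String>> additions; // extra files to include
        ModelProvider(String id, Predicate<String> enablement,
                      Function<String, List<String>> additions) {
            this.id = id;
            this.enablement = enablement;
            this.additions = additions;
        }
    }

    /** Expand the selected resources with each applicable provider's additions. */
    static Set<String> expand(List<String> selection, List<ModelProvider> providers) {
        Set<String> input = new LinkedHashSet<>(selection);
        for (String resource : selection) {
            for (ModelProvider provider : providers) {
                if (provider.enablement.test(resource)) {
                    input.addAll(provider.additions.apply(resource));
                }
            }
        }
        return input; // the operation is then performed on the expanded input
    }
}
```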
<p>This mechanism can be used to ensure that operations performed directly on
files include all the files that constitute a model and also will ensure that
the effects can be displayed to the user in a form consistent with the higher
level models that are affected. This will be covered separately in the <a href="#DisplayingLogicalModels">Displaying
Model Elements in Team Operations</a> section.</p>
<h3>Team Operation Lifecycle</h3>
<p>Most Team operations have multiple steps. To illustrate this, consider an update
operation. The steps of the operation, considering the inclusion of resource
mappings and other facilities described in this proposal are:</p>
<ol>
<li>Determine the complete set of resource mappings that are potentially being
updated.</li>
<li>Display this to the user in a model specific way so they can confirm the
operation.</li>
<li>If the operation is confirmed, delegate the merge to the model so it can
attempt to automatically merge the model elements being updated using model
semantics to aid in the merging.</li>
<li>If automatic merge of one or more model elements is not possible, perform
a manual merge on these elements by obtaining appropriate compare editor inputs
from the model provider.</li>
</ol>
<p>Each of these steps may involve separate calls from the repository tooling
to the model tooling. The model would not want to recompute the remote model
state during each step but instead would rather cache any computed state until
the entire operation was completed. One means of supporting this is to add listener
support to the team context associated with the operation and have an event
fired when the operation using the context is completed.</p>
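<p>A minimal sketch of this caching idea follows. The listener and
<code>dispose</code> names are assumptions for illustration; the actual event
mechanism would be defined by the team context API.</p>

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// NOTE: TeamContext and ContextListener are illustrative assumptions; the
// real notification mechanism would be part of the team context API.
class ContextLifecycleSketch {

    interface ContextListener {
        void contextDisposed(); // fired when the whole operation completes
    }

    static class TeamContext {
        private final List<ContextListener> listeners = new ArrayList<>();
        void addListener(ContextListener listener) { listeners.add(listener); }
        void dispose() { listeners.forEach(ContextListener::contextDisposed); }
    }

    /** A model-side cache of computed remote state, valid for one operation. */
    static class ModelCache {
        final Map<String, String> remoteState = new HashMap<>();
        ModelCache(TeamContext context) {
            // Flush the cached remote state once the operation is over rather
            // than recomputing it during each step of the operation.
            context.addListener(remoteState::clear);
        }
    }
}
```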
<h3>View Structure vs. Model Structure</h3>
<p>The contents of views in Eclipse are determined by a content provider. In most
cases, the structure of what is displayed matches the model structure but in
some cases it does not. One such example is the package explorer when it is
in hierarchy mode. In this mode, the children of a package are its files and
its subpackages. When the user performs an operation on a package in this view,
they may reasonably expect the operation to be performed on the package and
its subpackages. However, the package adapts to a resource mapping that only
includes the files and not the subpackages.</p>
<p>The simplest solution to this problem is to require that the content providers
wrap objects when they need adaptation to resource mappings and are displayed
in a way that does not match the model structure. In our Java example, this
would mean creating a new model object (e.g. DeepJavaPackage) whose children
were the Java classes and subpackages. The advantage of this approach is that
the process of converting a model object to a resource mapping can be performed
by the model without any knowledge of the view configuration. Some of the concerns
of this approach are:</p>
<ul>
<li>Views that display different representations of a model would not be able
to do so by simply implementing a custom content provider. In essence, they
would need to create new model objects to get the display structure they desired.</li>
<li>The introduction of new model objects means that any menus need to handle
these new model objects. This is simplified somewhat by the support for adaptability
but is still an issue that needs to be addressed by the model tooling.</li>
</ul>
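<p>To make the wrapper idea concrete, the following sketch contrasts the flat
(model-structure) mapping of a package with the deep mapping a hypothetical
wrapper object (in the spirit of <code>DeepJavaPackage</code>) would provide.
The types are illustrative only.</p>

```java
import java.util.ArrayList;
import java.util.List;

// NOTE: Pkg is a hypothetical stand-in for a Java package element; the two
// mapping methods illustrate shallow vs. deep resource mappings.
class DeepPackageSketch {

    static class Pkg {
        final String name;
        final List<String> files;
        final List<Pkg> subpackages = new ArrayList<>();
        Pkg(String name, List<String> files) {
            this.name = name;
            this.files = new ArrayList<>(files);
        }
    }

    /** Flat mapping: only the package's own files (the model structure). */
    static List<String> shallowMapping(Pkg p) {
        return p.files;
    }

    /** Deep mapping used by a wrapper object: the package's files plus the
     *  files of all subpackages (the hierarchical view structure). */
    static List<String> deepMapping(Pkg p) {
        List<String> all = new ArrayList<>(p.files);
        for (Pkg sub : p.subpackages) all.addAll(deepMapping(sub));
        return all;
    }
}
```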
<p>Another solution to this problem would be to:</p>
<ul>
<li>Include the content provider of a view in the selection (i.e. the selection
includes the context of where it originated).</li>
<li>Add a method to content providers that returns, for a given object, a resource
mapping that matches what the view is showing.</li>
<li>Change the adaptation code in the IDE plug-in to check with the content
provider before adapting the resource directly.</li>
</ul>
<p>The advantage of this approach is that model tooling can still use content
providers to provide alternate views of their model without wrapping model objects
or providing new model objects. The disadvantages are:</p>
<ul>
<li>The resource mapping determination process becomes more complicated.</li>
<li>Content providers would need to become more than just viewer configuration.
They would need to be included in the objectContributions for use in popup
menus and decorations and any other place where objects were adapted to resource
mappings.</li>
<li>There is no longer a one-to-one correspondence between a model object and
its resource mapping. In other words, clients could not assume that they could
get the model object out of a resource mapping and then reobtain the same
resource mapping from that object.</li>
</ul>
<p>Given the complexity of the second solution, the first is preferable from an
implementation standpoint. However, we need to determine if clients can accept
this solution.</p>
<h2><a name="DecoratorSupport"></a>Team Decorator Support</h2>
<p>This section describes the support that is proposed to be added in Eclipse
3.2 to support the decoration of logical model elements. In Eclipse 3.1 and
prior, logical model elements could still be decorated. However, the only inter-model
adaptability support was for models whose elements had a one-to-one mapping
to file system resources (i.e. <code>IResource</code>). Here is a summary of
the issues that we are hoping to address in 3.2.</p>
<ol>
<li>General adaptability of lightweight decorators. In other words, an element
of one model can be adapted to the element of another for the purpose of determining
the label decorations for the original element. This is available in 3.2 M1
(see <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=86159">bug 86159</a>).</li>
<li>Additional support for the decoration of model elements that adapt to <code>ResourceMapping</code>.
ResourceMapping decoration makes use of the general adaptability mechanism
but also requires support for triggering label updates for any logical element
whose decoration depends on the state of one or more resources (see <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=86493">bug
86493</a>).</li>
<li>Support for the proper dirty decoration of elements that are contained in
a file but whose state depends on the contents of the file. An example of
this is Java methods in a file. If you make a change in a Java file that is
mapped to CVS, the file is decorated as dirty. It would be beneficial if the
dirty decoration could also be placed on those particular methods or class
members that are dirty. This is much more important for models where the user
may not be as directly aware of the relationship between a model element and
the file in which it is persisted.</li>
</ol>
<p>As stated above, point one has already been completed. The following sections
describe potential solutions to the remaining two problems. The first two sections
describe potential solutions using the existing architecture while the third
presents a unified solution that makes use of the team context described in
the Common Navigator section.</p>
<h3>Updating of Model Element Labels</h3>
<p>Some repository decorations are propagated to the root of any views that display
elements shared in the repository. This is done in order to provide useful information
to the user. For instance, the &quot;shared with CVS&quot; decoration (by default,
the <img src="version_controlled.gif" width="7" height="8"> icon) should appear
on any object on which a CVS operation can be performed. Similarly, the dirty
decoration (by default, a &quot;&gt;&quot; prefix) should appear on any view
item containing a dirty child in order to help the user find dirty items. For
the purpose of discussion, we will use dirty decoration when describing our
proposal but the same will hold true for other decorations that require propagation.</p>
<p>When a file becomes dirty, a label change must be issued for any items visible
to the user whose dirty state has changed or that are direct or indirect parents
of such an item. When we are dealing strictly with file system resources, this
is straightforward. When a file becomes dirty, a label change is issued for
the file and the folders and project containing the file. Any views that are
displaying these items will then update their labels. It is the responsibility
of models that have a one-to-one mapping from files to model elements to update
the labels of the corresponding model elements as well. For instance, JDT maps
the file, folder and project label changes to label changes on Java model elements
such as Compilation Units, Packages and Java Projects so that decorations in
the Packages Explorer get updated properly.</p>
<p>However, problems arise for logical models elements that do not have a one-to-one
mapping to file resources. For instance, consider a working set that contains
several projects. The repository provider does not know that the working set
is being displayed to the user, so it does not issue a label update for it. The
view displaying the working set does not know when the state of the children
impacts the label of the parent. It could try to fake it by updating the working
set label whenever the label of any child is updated, but this could result
in many unnecessary and potentially costly updates.</p>
<p>The following points summarize the aspects of the problem that should be considered
when showing repository decorations in a model view:</p>
<ol>
<li>The repository deals with changes at the file system level so repository
decoration changes occur on files or folders.</li>
<li>The model views deal with decoration on model elements so label changes
are issued on model elements. A means to translate file changes to model element
changes is required.</li>
<li>The repository may have some decorated properties, such as the dirty state,
that are derived from the model view structure and not the file structure
(i.e. all parents of a dirty item in a view should be decorated as dirty).</li>
<li>Recalculating the label for a model element may be costly so, if possible,
label changes should only be issued if there is an actual change in the
state that determines the label.</li>
</ol>
<p>It is interesting to note that the requirement in point 2 can be solved using
the <a href="#TeamOperation">Team Operation Input Determination</a> mechanism described
previously. However, addressing the last two points will require additional
support. The next two sections describe a potential solution. It is useful to
note that any solution we come up with must consider the broader direction in
which decoration support in Eclipse will go. We have tried to consider
this when drafting this proposal.</p>
<h4>Decoration Change Notification</h4>
<p>Currently, a decoration change is broadcast implicitly by issuing label change
events for the elements that need redecoration. From a repository tooling standpoint,
this means generating a label change on any changed file resources (and their
ancestor resources if the decorator that represents the changed state is propagated
to parents). It is then up to the model tooling to translate these label changes
of file resources to label changes on the appropriate model elements.</p>
<p>An alternative approach would be to make the decoration change notification
explicit. Thus, the repository tooling could issue a decoration change event
that contains the resources that need redecoration. It would then be up to
any views that are displaying a repository decoration to update the label of
any elements appropriately. This would mean determining the set of elements
that correspond to the given resources.</p>
<p>As stated in point 4 above, a possible optimization is to only issue the label
change if the state of the decoration has changed. This can be accomplished
by including, as part of the change notification event, a property evaluator
that evaluates and caches the properties for each element it is provided and
indicates whether a change has occurred which requires the item to be redecorated.</p>
<h4>Decoration Propagation</h4>
<p>In the previous section we mentioned the possibility of having a property evaluator
that indicated whether a label change was required. This evaluator could also
indicate whether a reevaluation for the parent of the element is required. That
is, if the evaluator calculated that the dirty state of the element had changed,
it could indicate that the label update was required and that the evaluator
should be executed with the parent element as input in order to determine if
a label change was required for the parent and if the process should be repeated
for the parent element's parent.</p>
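<p>The evaluator-driven propagation described above might look roughly like this,
with the view tree and the dirty computation reduced to plain Java. All names are
illustrative; the point is that the upward walk stops as soon as a parent's
computed state is unchanged.</p>

```java
import java.util.ArrayList;
import java.util.List;

// NOTE: Item is a hypothetical stand-in for a visible view element with a
// cached decorated state; the real evaluator would run in a background job.
class DirtyPropagationSketch {

    static class Item {
        final Item parent;
        boolean dirty;                 // cached decorated state
        final List<Item> children = new ArrayList<>();
        Item(Item parent) {
            this.parent = parent;
            if (parent != null) parent.children.add(this);
        }
        boolean computeDirty() {       // leaf state, or any-dirty-child for containers
            if (children.isEmpty()) return dirty;
            return children.stream().anyMatch(Item::computeDirty);
        }
    }

    /** Returns the items whose labels need updating after 'leaf' changed. */
    static List<Item> itemsToRedecorate(Item leaf, boolean newDirty) {
        leaf.dirty = newDirty;
        List<Item> changed = new ArrayList<>();
        changed.add(leaf);
        for (Item p = leaf.parent; p != null; p = p.parent) {
            boolean computed = p.computeDirty();
            if (computed == p.dirty) break; // parent unchanged: stop the walk
            p.dirty = computed;
            changed.add(p);
        }
        return changed;
    }
}
```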
<p>This calculation could be long running. Thus, it should be performed in a background
job with minimal use of the UI thread. This may be a bit tricky as JFace viewers
are not threadsafe (i.e. they are mostly invoked from the UI thread). The current
JFace viewers persist the tree of elements in the SWT tree items so accessing
them needs to be run in the UI thread. Also, label changes need to be run in
the UI thread. These factors must be considered when designing a solution.</p>
<h3>Sub-File Level Dirty Decorations</h3>
<p>In this section we present API on <code>ResourceMapping</code> that supports
change determination on logical model elements. With this API, the algorithm
used by the decorator would be this:</p>
<ul>
<li>When asked to decorate a resource mapping, the team provider would invoke
the <code>calculateChangeState</code> method on the resource mapping to provide
a change state given a remote mapping context that does not allow contact
with the server.</li>
<li>The mapping will return the change state or <em>MAY_HAVE_DIFFERENCE</em>
if the change state cannot be determined without contacting the server.</li>
<li>If server contact is required, the decorator should use a background job
to query the change state. It can then cache this result and re-issue a label
update.</li>
</ul>
<p>In addition to the API on ResourceMapping, it would also be beneficial to provide
an abstract lightweight decorator that team providers can use to get the above
described behavior.</p>
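<p>A sketch of that decoration algorithm is shown below, with the decorator,
cache, and background job reduced to plain Java. The constants mirror those
proposed for <code>ResourceMapping</code>; everything else is hypothetical.</p>

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.IntSupplier;

// NOTE: a plain-Java sketch of the proposed decorator behavior; the real
// implementation would use a lightweight decorator and an Eclipse Job.
class ChangeStateDecoratorSketch {
    static final int NO_DIFFERENCE = 0;
    static final int HAS_DIFFERENCE = 1;
    static final int MAY_HAVE_DIFFERENCE = 2;

    final Map<String, Integer> cache = new HashMap<>();
    final List<String> backgroundQueue = new ArrayList<>();

    /** Decorate without blocking: use the cache, else a no-contact calculation. */
    String decorate(String element, IntSupplier calculateWithoutServer) {
        int state = cache.getOrDefault(element, calculateWithoutServer.getAsInt());
        if (state == MAY_HAVE_DIFFERENCE) {
            backgroundQueue.add(element); // schedule a server query; re-decorate later
            return element;               // undecorated until the real state is known
        }
        return state == HAS_DIFFERENCE ? "> " + element : element;
    }

    /** Simulates the background job completing and caching the real state. */
    void backgroundResult(String element, int state) {
        cache.put(element, state);
        backgroundQueue.remove(element);
    }
}
```

<p>The second label update would be triggered by the decorator once the background
query completes and the state is cached.</p>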
<h4>ResourceMapping Changes</h4>
<p>Here are the proposed API additions to the <code>ResourceMapping</code> class.
Note that there would be additional API added to <code>ResourceMapping</code>
and <code>RemoteResourceMappingContext</code> to aid models in their calculation
of the change state.</p>
<pre>
public abstract class ResourceMapping {
/**
* Constant returned by <code>calculateChangeState</code> to indicate that
* the model object of this resource mapping does not differ from the
* corresponding object in the remote location.
*/
public static final int NO_DIFFERENCE = 0;
/**
* Constant returned by <code>calculateChangeState</code> to indicate that
* the model object of this resource mapping differs from the corresponding
* object in the remote location.
*/
public static final int HAS_DIFFERENCE = 1;
/**
* Constant returned by <code>calculateChangeState</code> to indicate that
* the model object of this resource mapping may differ from the
* corresponding object in the remote location. This is returned when
* <code>calculateChangeState</code> was not provided with a progress monitor and the remote
* state of the object was not cached.
*/
public static final int MAY_HAVE_DIFFERENCE = 2;
/**
* Calculate the change state of the local object when compared to its
* remote representation. If server contact is required to properly
* calculate the state but is not allowed (as indicated by an exception with
* the code
* <code>RemoteResourceMappingContext.SERVER_CONTACT_PROHIBITED</code>),
* <code>MAY_HAVE_DIFFERENCE</code> should be returned. Otherwise
* <code>HAS_DIFFERENCE</code> or <code>NO_DIFFERENCE</code> should be
* returned as appropriate. Subclasses may override this method.
*
* It is assumed that, when server contact is not allowed, the methods
* <code>RemoteResourceMappingContext#contentDiffers</code> and
* <code>RemoteResourceMappingContext#fetchMembers</code> of the context
* provided to this method can be called without contacting the server.
* Clients should ensure that this is how the context they provide behaves.
*
* @param context a resource mapping context
* @param monitor a progress monitor or <code>null</code>. If
* <code>null</code> is provided, the server will not be
* contacted and <code>MAY_HAVE_DIFFERENCE</code> will be
* returned if the change state could not be properly determined
* without contacting the server.
* @return the calculated change state of <code>HAS_DIFFERENCE</code> if
* the object differs, <code>NO_DIFFERENCE</code> if it does not
* or <code>MAY_HAVE_DIFFERENCE</code> if server contact is
* required to calculate the state.
* @throws CoreException
*/
public int calculateChangeState(
RemoteResourceMappingContext context,
IProgressMonitor monitor)
throws CoreException {
try {
int changeState = ...
return changeState;
} catch (CoreException e) {
if (e.getStatus().getCode() == RemoteResourceMappingContext.SERVER_CONTACT_PROHIBITED)
return MAY_HAVE_DIFFERENCE;
throw e;
}
}
}</pre>
<h3>Team Aware Model Views</h3>
<p>The complexities described in the previous sections arise because of the separation
of models and decorators. An alternate approach would be to use the team context
discussed in the <a href="#GenericNavigator">Common Navigator</a> section for
any model view. Such support would work something like this.</p>
<ul>
<li>The team provider would make one or more team contexts globally available.</li>
<li>Model tooling could allow the user to pick which context they wanted available
in a particular view.</li>
</ul>
<p>The details would be the same as those discussed in the Common Navigator section.
This would simplify the decorator update story as the view would then listen
to both resource deltas and team deltas and update model elements and labels
appropriately. The model will have enough information available from the team
context to make decisions about propagation in any way it deems appropriate.
The models will also be able to determine the change state of their model elements
for themselves so no additional API on <code>ResourceMapping</code> would be
required.</p>
<h2><a name="ModelLevelMerging"></a>Model Level Merging</h2>
<p>There are two types of merges that can take place: automatic and manual. Automatic
merges (or auto-merges) are merges that either do not contain file level conflicts
or whose file level conflicts can be resolved without user intervention. Manual
merges require the user to inspect the conflicting changes and decide how to
resolve them. In either case, involvement of the model in these two types of
merges is beneficial. For auto-merges, model knowledge can increase the likelihood
of an automatic merge being possible, and for manual merges, model involvement can
enhance how the merges are displayed and performed.</p>
<p>In this section we describe the API we propose to add to support model merging:</p>
<ul>
<li><code>IResourceMappingMerger</code>: an interface that model tooling implements
to allow repository tooling to perform head-less merges when possible on resource
mappings. The merger will also indicate when head-less merges are not possible.</li>
<li><code>IResourceMappingEditorInputFactory</code>: an interface that model
tooling implements to allow resource mappings to be merged manually.</li>
<li><code>MergeContext</code>: an API which allows the model tooling to interact
with the repository tooling in order to perform model level merges.</li>
</ul>
<p>Given a set of resource mappings, the repository tooling needs to be able to
obtain the model tooling support classes which will perform the merging. This
will require: </p>
<ul>
<li>A <code>getModelId</code> method on <code>ResourceMapping</code> to associate
a model id with each resource mapping.</li>
<li>An extension point for associating implementations of the above described
merge support classes with a model id.</li>
</ul>
<p>The steps for performing an optimistic merge would then look something like
this:</p>
<ul>
<li>A merge operation is performed on a set of resource mappings.</li>
<li>The Team participants are consulted to ensure that all the relevant resource
mappings are included in the operation.</li>
<li>The mappings are grouped by model id and the <code>IResourceMappingMerger</code>
is obtained and invoked for each group.</li>
<li>The results from the merges are accumulated including the list of mappings
that could not be auto-merged.</li>
<li>The failed mappings are again grouped by model id and the registered
<code>IResourceMappingEditorInputFactory</code> is used to obtain a set of editor
inputs for these elements.</li>
<li>Editors are opened on these mappings so that they can be manually merged.</li>
</ul>
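<p>The grouping and accumulation steps above can be sketched as follows.
<code>Mapping</code> and the merger function are hypothetical, plain-Java
stand-ins for <code>ResourceMapping</code> and <code>IResourceMappingMerger</code>.</p>

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// NOTE: a simplified sketch of the optimistic-merge orchestration; real
// mergers would be looked up via the model-id extension point.
class OptimisticMergeSketch {

    static class Mapping {
        final String modelId;
        final String element;
        Mapping(String modelId, String element) {
            this.modelId = modelId;
            this.element = element;
        }
    }

    /**
     * Group mappings by model id, run each model's merger, and collect the
     * mappings that could not be auto-merged. A merger returns the subset of
     * its group that still needs a manual merge.
     */
    static List<Mapping> mergeAll(List<Mapping> mappings,
            Map<String, Function<List<Mapping>, List<Mapping>>> mergersByModelId) {
        Map<String, List<Mapping>> byModel = new LinkedHashMap<>();
        for (Mapping m : mappings) {
            byModel.computeIfAbsent(m.modelId, k -> new ArrayList<>()).add(m);
        }
        List<Mapping> needManualMerge = new ArrayList<>();
        byModel.forEach((modelId, group) ->
            needManualMerge.addAll(mergersByModelId.get(modelId).apply(group)));
        return needManualMerge; // editor inputs would then be opened on these
    }
}
```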
<p>When the model is asked to merge elements, either automatically or manually,
it will need access to the remote state of the model. API for this is also being
proposed. </p>
<h3>Auto-merging API</h3>
<p>In this section, we propose some API that will allow for model based auto-merging.
Before we do that, we should first mention that Eclipse has a pluggable <code>IStreamMerger</code>
(introduced in 3.0) for supporting model based merges when there is a one-to-one
correspondence between a file and a model object. While it is not
currently used by CVS (or, as far as we know, any other repository provider),
it can be part of the solution we propose here.</p>
<p>The proposed API to support model level merges consists of the following:</p>
<ul>
<li><code>IResourceMappingMerger</code>: This is similar to the <code>IStreamMerger</code>
but is obtained from resource mappings and is provided a <code>MergeContext</code>
from which the model can obtain any ancestor and remote file contents that
it requires.</li>
<li><code>MergeContext</code>: Provides access to the ancestor and remote file
contents using <code>RemoteResourceMappingContexts</code> and also has helper
methods for performing file merges and for signaling the context that a file
has been merged so that the file can be marked up-to-date.</li>
</ul>
<h4>Resource Mapping Merger</h4>
<p>Below is what the <code>IResourceMappingMerger</code> would look like. It contains
a <code>merge</code> method whose semantics differ depending on the type of the merge
context. A merge is performed for three-way synchronizations and a replace
occurs for two-way contexts. The model can determine which model elements need
to be merged by consulting the merge context which is presented in the next
section. </p>
<pre>/**
* The purpose of this interface is to provide support to clients (e.g.
* repository providers) for model level auto-merging. It is helpful in the
* cases where a file may contain multiple model elements or a model element
* consists of multiple files. It can also be used for cases where there is a
* one-to-one mapping between model elements and files, although
* <code>IStreamMerger</code> can also be used in that case.
*
* Clients should determine if a merger is available for a resource mapping
* using the adaptable mechanism as follows:
*
* Object o = mapping.getModelProvider().getAdapter(IResourceMappingMerger.class);
* if (o instanceof IResourceMappingMerger) {
* IResourceMappingMerger merger = (IResourceMappingMerger)o;
* ...
* }
*
* Clients should group mappings by model provider when performing merges.
* This will give the merge context an opportunity to perform the
* merges optimally.
*
* @see org.eclipse.compare.IStreamMerger
* @see org.eclipse.team.internal.ui.mapping.IResourceMappingManualMerger
* @since 3.2
*/
public interface IResourceMappingMerger {
/**
* Attempt to automatically merge the mappings of the merge context (<code>MergeContext#getMappings()</code>).
* The merge context provides access to the out-of-sync resources (<code>MergeContext#getSyncInfoTree()</code>)
* associated with the mappings to be merged. However, the set of resources
* may contain additional resources that are not part of the mappings being
* merged. Implementors of this interface should use the mappings to
* determine which resources to merge and what additional semantics can be
* used to attempt the merge.
*
* The type of merge to be performed depends on what is returned by the
* <code>MergeContext#getType()</code> method. If the type is
* <code>MergeContext.TWO_WAY</code> the merge will replace the local
* contents with the remote contents, ignoring any local changes. For
* <code>THREE_WAY</code>, the base is used to attempt to merge remote
* changes with local changes.
*
* Auto-merges should be performed for as many of the context's resource
* mappings as possible. If merging was not possible for one or more
* mappings, these mappings should be returned in a
* <code>MergeStatus</code> whose code is
* <code>MergeStatus.CONFLICTS</code> and which provides access to the
* mappings which could not be merged. Note that it is up to the model to
* decide whether it wants to break one of the provided resource mappings
* into several sub-mappings and attempt auto-merging at that level.
*
* @param mergeContext a context that provides access to the resources
* involved in the merge. The context must not be
* <code>null</code>.
* @param monitor a progress monitor
* @return a status indicating the results of the operation. A code of
* <code>MergeStatus.CONFLICTS</code> indicates that some or all
* of the resource mappings could not be merged. The mappings that
* were not merged are available using
* <code>MergeStatus#getConflictingMappings()</code>
* @throws CoreException if errors occurred
*/
public IStatus merge(IMergeContext mergeContext,
IProgressMonitor monitor) throws CoreException;
}</pre>
<p>It is interesting to note that partial merges are possible. In such a case,
the <code>merge</code> method must be sure to return a <code>MergeStatus</code>
that contains any resource mappings for which the merge failed. These mappings
could match some of the mappings passed in or could be mappings of sub-components
of the larger mapping for which the merge was attempted, at the discretion of
the implementer.</p>
<h4>Merge Context</h4>
<p>In order for repository tooling to support model level merging, they must be
able to provide an <code>IMergeContext</code>. The merge context provides:</p>
<ul>
<li>a team synchronization context (i.e. <code>IMergeContext</code> extends
the <code>ISynchronizationContext</code> introduced in the Common Navigator
section). </li>
<li> the ability to mark a file as merged. This is provided in order to allow
the model tooling to signal when it has completed the merge so that the
repository tooling can then update the synchronization meta-data of the file
so it can be checked-in or committed. </li>
<li>merge methods which allow the model tooling to delegate the merge of one
or more files back to the repository tooling (i.e. the model does not need
to do any special handling to aid the merge).</li>
<li>a <code>ResourceMappingScope</code> that provides access to the resource
mappings involved in the merge.</li>
</ul>
<p>The following are the proposed API methods of the merge context.</p>
<pre>/**
* Provides the context for an <code>IResourceMappingMerger</code>
* or a model specific synchronization view that supports merging.
*
* TODO: Need to have a story for folder merging
*
* This interface is not intended to be implemented by clients.
*
* @see IResourceMappingMerger
* @since 3.2
*/
public interface IMergeContext extends ISynchronizationContext {
/**
* Method that allows the model merger to signal that the file in question
* has been completely merged. Model mergers can call this method if they
* have transferred all changes from a remote file to a local file and wish
 * to signal that the merge is done. This will allow repository providers to
* update the synchronization state of the file to reflect that the file is
* up-to-date with the repository.
*
* Clients should not implement this interface but should instead subclass
* MergeContext.
*
* @see MergeContext
*
* @param file the file that has been merged
* @param monitor a progress monitor
* @return a status indicating the results of the operation
*/
public abstract IStatus markAsMerged(IFile file, IProgressMonitor monitor);
/**
* Method that can be called by the model merger to attempt a file-system
* level merge. This is useful for cases where the model merger does not
* need to do any special processing to perform the merge. By default, this
* method attempts to use an appropriate <code>IStreamMerger</code> to
* merge the files covered by the provided traversals. If a stream merger
* cannot be found, the text merger is used. If this behavior is not
* desired, sub-classes may override this method.
*
* This method does a best-effort attempt to merge all the files covered
* by the provided traversals. Files that could not be merged will be
* indicated in the returned status. If the status returned has the code
* <code>MergeStatus.CONFLICTS</code>, the list of failed files can be
* obtained by calling the <code>MergeStatus#getConflictingFiles()</code>
* method.
*
* Any resource changes triggered by this merge will be reported through the
* resource delta mechanism and the sync-info tree associated with this context.
*
* TODO: How do we handle folder removals generically?
*
* @see SyncInfoSet#addSyncSetChangedListener(ISyncInfoSetChangeListener)
* @see org.eclipse.core.resources.IWorkspace#addResourceChangeListener(IResourceChangeListener)
*
* @param infos
* the sync infos to be merged
* @param monitor
* a progress monitor
 * @return a status indicating success or failure. A code of
 * <code>MergeStatus.CONFLICTS</code> indicates that one or more
 * files contain non-mergable conflicts and must be merged manually.
* @throws CoreException if an error occurs
*/
public IStatus merge(SyncInfoSet infos, IProgressMonitor monitor) throws CoreException;
/**
* Method that can be called by the model merger to attempt a file level
* merge. This is useful for cases where the model merger does not need to
* do any special processing to perform the merge. By default, this method
* attempts to use an appropriate <code>IStreamMerger</code> to perform the
* merge. If a stream merger cannot be found, the text merger is used. If this behavior
* is not desired, sub-classes may override this method.
*
 * @param info the sync info of the file to be merged
* @param monitor a progress monitor
* @return a status indicating success or failure. A code of
 * <code>MergeStatus.CONFLICTS</code> indicates that the file contains
* non-mergable conflicts and must be merged manually.
 * @see #merge(SyncInfoSet, IProgressMonitor)
*/
public IStatus merge(SyncInfo info, IProgressMonitor monitor);
}</pre>
<h3>Manual Merging</h3>
<p>Providing the capability to manually merge a set of model elements requires
two things:</p>
<ul>
<li>the ability to display a high level view of the model </li>
<li>the ability to merge individual model elements manually</li>
</ul>
<p>The first requirement is met by the team context proposal outlined in the <a href="#GenericNavigator">Common
Navigator</a> section. The second can be met by giving such a view access to
the merge context discussed in the <a href="#ModelLevelMerging">Model Level
Merging</a> section. This context provides enough state and functionality to
display a two-way or three-way comparison and perform the merge.</p>
<h2><a name="DisplayingLogicalModels"></a>Displaying Model Elements in Team Operations</h2>
<p>There are two types of displays that a Team operation may need:</p>
<ul>
<li>Display the local model elements in a tree. For instance, a team operation
may need to indicate which local elements will take part in a versioning
operation. This is similar to what the Common Navigator would require in the
sense that the team operation will need a content provider, label provider,
etc. for displaying logical model elements.</li>
<li>Display the synchronization state between the local model elements and their
remote counterparts. This is more complicated as it requires the display of
model elements that may not exist remotely and also requires that decorators
be calculated based on the comparison of the local, remote and possibly ancestor
file contents.</li>
</ul>
<p>Both these requirements are met by the team context proposal outlined in the
<a href="#GenericNavigator">Common Navigator</a> section.</p>
<h2><a name="Bootstrapping"></a>Remote Discovery</h2>
<p>There are two aspects to consider for this feature:</p>
<ul>
<li>How does the user see what is available in the repository? </li>
<li>How does the user transfer models from the repository to their local workspace?
</li>
</ul>
<p>In the following sections we outline some specific scenarios and describe what
would be required to support them.</p>
<h3>Remote Browsing</h3>
<p>Logical model browsing in the repository would need to be rooted at the project
as that is where the associations between the resources and the model providers
are persisted. This leads to the following requirements:</p>
<ul>
<li>The repository must be able to provide the relevant slice of the repository
history for that project. That is, the repository must be able to provide
the file states and contents for a particular repository branch or version
(or some other means of getting a repository time slice). Another way of thinking
of this is that the repository must be able to provide a virtual file system
whose contents match what they would be if the branch or version being browsed
were loaded from the repository onto the local disk. This requirement is almost
identical to what is required to support a RemoteResourceMappingContext so,
if a repository is able to provide a context, it should be able to meet this
requirement. </li>
<li>The model provider must be able to interpret the contents of the remote
project in order to build the model corresponding to the contents of the files
in the project.</li>
<li>The remote model must be displayed in a view so the user can browse it.</li>
</ul>
<p>There are two options for providing the remote project contents to the model
provider.</p>
<ol>
<li>Provide a new API, potentially similar in form to the RemoteResourceMappingContext,
and require model providers to reimplement their model building code in terms
of this API.</li>
<li>Provide a means to present the remote project as an IProject to the model
provider. In this case, the model provider could reuse the code it currently
has for building its model.</li>
</ol>
<p>The second option is definitely preferable from a model provider standpoint
because of the potential to reuse existing code. There are, however, a few things
to consider:</p>
<ul>
<li>Providing the remote project as an <code>IProject</code> does allow the
reuse of model building code. However, that code may have been written with
the assumption that the file contents are all local. Having an <code>IProject</code>
that is a view of remote state may introduce some performance problems.</li>
<li>The model code could not assume that a local file system object (i.e. <code>java.io.File</code>)
could be obtained from an <code>IFile</code> using <code>getLocation().toFile()</code>.</li>
<li>The remote <code>IProject</code> will be read-only. The model code will
need to handle this. Ideally, this would be identified up front so the model
provider could indicate to the user which operations were not available. In
the absence of this, the model provider would need to fail gracefully on failed
writes.</li>
<li>It is possible that building the models requires builders to run and generate
output files. If this is the case, an appropriate place to put the output must
be determined. As mentioned in the previous point, the project would be read-only.
Special handling may be needed to allow the project builder to create build
output in a local cache associated with the project.</li>
<li>It is also possible that building the model requires information from related
projects. This would require either
<ul>
<li>a means of identifying those projects in such a way that they could
also be located and interpreted by the model.</li>
<li>a means of dealing with the lack of access to the referenced projects</li>
</ul>
</li>
<li>In essence, this could lead to different versions of the same model
being built in a single workspace. It is equivalent to checking out different
versions of the same project into a single workspace. This may cause problems
for model providers that have a single model for the whole workspace. PDE
is an example of a model provider that does this.</li>
<li>Having the project as the point of interpretation implies that the entire
project needs to be interpreted, even if the user only wants to view a single
element. </li>
</ul>
<p>Several of the issues mentioned above would benefit from having an explicit
distinction between projects that are remote views of a project state and those
that are locally loaded in order to perform modifications. </p>
<h3>Comparing Remote Versions</h3>
<p>When browsing a remote logical model version, the user may then want to compare
what they see with another version. If the browsing is done using a remote <code>IProject</code>,
then the comparison is no different than if it were performed between a local
copy of the model and a remote resource mapping context.</p>
<h3>Viewing Logical Model Element History</h3>
<p>The user wants to see the change history of a particular model element. In order
to do that, we need the following.</p>
<ul>
<li>The repository must be able to produce an ordered list of time slices (what
  exactly is a time slice, and is there a better term?)</li>
<li>The model must interpret each time slice in order to determine if the element
  of interest changed</li>
<li>The model provider must display a history of changes for the model element
and be able to open the state of the model at each point in its history for
the purposes of display or comparison. The list of changes could simply be
a list of timestamps indicating when the model element was changed (similar
to what is shown in the local history) although it would be reasonable to
expect additional information such as the commit comment associated with the
change to also appear.</li>
</ul>
<p>The above is straightforward if there is a one-to-one mapping between files
and model elements. Repositories can typically provide the history for a particular
file efficiently. The model could then interpret each file revision as a model
change (i.e. the model provider could show a list of model element changes using
the timestamp of file changes as the timestamps for the model element changes).
If a user opened a particular change, the model provider would then load and
interpret the contents of the file in order to display it in an appropriate
way.</p>
<p>In the case where there are multiple model objects in a single file (many-to-one),
the model provider would need to interpret the contents of the file in order
to determine if the model element of interest actually changed in any particular
file change. This could result in potentially many file content fetches in a
way for which repository tooling is not optimized (i.e. repository tooling is
optimized to give you a time slice not to retrieve all the revisions of the
same file). One way to deal with this would be to have the model provider use
the file history as the change history for the model element with the understanding
that the element may not have changed between entries. Another possibility would
be to do the computation once and cache the result (i.e. the points at which
each element in the file changed) to be used in the future. As new revisions
of a file are released, the cache could be updated to contain the latest change
history. This would only need to consider the newest revision as the history
doesn't change. It may even be possible to share this history description in
the project.</p>
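<p>The caching approach described above could be sketched as follows. The cache
shape, the revision identifiers, and the notion of "interpreting" a revision are
all hypothetical; the point is that, because history is append-only, only the
newest revision ever needs to be interpreted.</p>

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the proposed cache: for each model element, record the
// file revisions in which that element actually changed.
class ElementHistoryCache {
    // element id -> ordered list of revision ids in which the element changed
    private final Map<String, List<String>> changesByElement = new LinkedHashMap<>();
    private String lastSeenRevision;

    // Called once per new file revision, oldest first. 'changedElements' is the
    // result of interpreting the file contents at that revision (model work
    // that only needs to happen once per revision).
    void recordRevision(String revisionId, Set<String> changedElements) {
        for (String element : changedElements) {
            changesByElement.computeIfAbsent(element, k -> new ArrayList<>())
                            .add(revisionId);
        }
        lastSeenRevision = revisionId;
    }

    // The cached change history for one element; never requires re-reading
    // revisions that have already been recorded.
    List<String> historyOf(String element) {
        return changesByElement.getOrDefault(element, Collections.emptyList());
    }

    String lastSeenRevision() { return lastSeenRevision; }
}
```

As new revisions are released, only `recordRevision` for the newest revision is needed; the existing entries are never recomputed, which is what makes sharing the cache in the project plausible.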
<p>The final case to consider is when a model element spans multiple files (one-to-many).
If the files that make up a model element never change, then it is simply a case
of looking at the history of each file involved and building the element history
from that. However, it becomes more complicated if the number or location of
the files that make up a model element can change. The calculation of the change
history can then become quite expensive depending on how the files that make
up a model element are determined. For example, determining what files make
up a model element may require the contents of one or more files to be read.
Thus, you end up in the same situation as the many-to-one case. The same solutions
proposed for that case could also be used here.</p>
<p>The one-to-many case is interesting for another reason. Different repositories
provide different types of history. For instance, CVS only provides file based
history. In the UI, the history based CVS operations are only available for
files but not for folders or projects. That's not to say that the history couldn't
be obtained for a folder or project; it is just that it can be expensive to
determine (i.e. would require transferring large amounts of information from
the server). Other repositories could potentially provide higher level histories.
For instance, Subversion treats commits atomically so the history of a project
could be determined by obtaining the set of all commits that intersected with
the project.</p>
<p>This is important because supporting Team operations on logical model elements
blurs the distinction between files and folders. That is, logical model elements
adapt to ResourceMappings which could be a part of a file, a complete file,
a set of files, a single folder, a set of folders, etc. The question is whether
the ability to see the history of a model element should be available for all
model elements or for only some.</p>
<ul>
<li>One possibility would be to allow the repository tooling to decide, based
on the structure of a resource mapping, whether change history should be available.
For instance, CVS may only make history available for model elements that
adapt to one file (or a small number of files). It then becomes a repository
restriction as to which model elements history is available for.</li>
<li>Another is to have the repository tooling provide the ability for the model
tooling to calculate the history of arbitrarily large model elements with
the understanding that it might take a long time. The model tooling would
need to consider ways to improve the efficiency of how changes are determined
for this to be practical.</li>
<li>A hybrid approach would be to have the model tooling decide which elements
should have history based on a capability description of the repository. The
difficult part here is defining the capability description.</li>
<li>Another hybrid approach would be to have the model flag those elements for
which it must have history. The repository would do the best it could for
those and for others would decide on a case by case basis which should have
history. </li>
</ul>
<p>Supporting history on arbitrary model elements requires the repository to be
able to produce a time slice for each interesting change. This may be possible
for some repositories, such as Subversion since it supports atomic commits.
However, for others like CVS, there is no built in way to determine all the
files that belong to a single commit. This could potentially be deduced by looking
at a set of file histories and grouping the changes by timestamp but this would
be a prohibitively expensive operation. Another possibility would be to present
a reduced set of time slices based on version tags but this has its own potential
failings (i.e. tags are done at the file level as well so there are no guarantees
that a tag represents the complete time slice of a project).</p>
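<p>The timestamp-grouping idea could be sketched as follows, with the caveat noted
above that gathering all the file histories from a real server would be expensive.
The types and the grouping window are illustrative assumptions.</p>

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of deducing "time slices" from per-file history: group individual file
// revisions whose commit timestamps fall within a small window into one
// inferred commit. CVS itself records no commit identifier, so this grouping
// is only a heuristic.
class CommitInference {
    static class FileRevision {
        final String path;
        final long timestampSeconds;
        FileRevision(String path, long timestampSeconds) {
            this.path = path;
            this.timestampSeconds = timestampSeconds;
        }
    }

    // Returns revisions grouped into inferred commits, oldest first. A revision
    // joins the current group if it is within 'windowSeconds' of the group's
    // first revision; otherwise it starts a new group.
    static List<List<FileRevision>> groupByTimestamp(List<FileRevision> revisions,
                                                     long windowSeconds) {
        List<FileRevision> sorted = new ArrayList<>(revisions);
        sorted.sort(Comparator.comparingLong(r -> r.timestampSeconds));
        List<List<FileRevision>> groups = new ArrayList<>();
        List<FileRevision> current = null;
        long groupStart = Long.MIN_VALUE;
        for (FileRevision r : sorted) {
            if (current == null || r.timestampSeconds - groupStart > windowSeconds) {
                current = new ArrayList<>();
                groups.add(current);
                groupStart = r.timestampSeconds;
            }
            current.add(r);
        }
        return groups;
    }
}
```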
<h3>Loading Logical Models</h3>
<p>Ideally, users would be able to browse their model structure in the repository
and pick those items which they wish to transfer to their workspace (i.e. checkout).
In Eclipse, projects are the unit of transfer between the repository and the
local workspace. This has the following implications:</p>
<ul>
<li>Multi-project models (i.e. models that span projects) would require additional
support to identify the relationship between the projects.</li>
<li>Partial project loading (i.e. the ability to load a subset of the model
elements persisted in a project) would require additional repository tooling
support.</li>
</ul>
<p>The majority of the work here would need to be done by the repository tooling.
That is, they would need to provide remote browsing capabilities and support
partial project loading if appropriate. The ability to support cross-project
references would also need additional API in Team that allowed these relationships
to be stated in such a way that they could be shared and converted back to a
project.</p>
<h2><a name="EMF"></a>The Potential Role of EMF</h2>
<p>Although not part of the Platform, it is worthwhile to mention the potential
role of EMF in a lot of the areas touched by this proposal. For EMF models,
much of the required implementation could be done at the EMF level, thus simplifying
what models would need to do. Some possibilities are:</p>
<ul>
<li> An EMF Resource Mapping that provides the mapping of the model object to
files and also could provide the change determination code. </li>
<li>At least partial implementations of the synchronize, compare and merge API</li>
</ul>
<p>The following sections mention some of the issues we've come across when prototyping
using EMF.</p>
<h3>Identifying Model Objects</h3>
<p>One of the requirements for supporting team operations on logical models is
to be able to identify and compare model elements. By default, EMF uses object
identity to indicate that two model elements are the same element. This works
when you only have one copy of the model. However, for team operations, there
can be multiple copies of the model (i.e. local, ancestor and remote). EMF does
support the use of GUIDs (i.e. when XMI is used) but it is not the default.</p>
<p>This gives rise to another issue. Team operations can involve up to 3 copies
of a model. Putting and keeping all 3 models in memory has performance implications.
A means of identifying a model element without requiring that the entire model
be loaded would be helpful. </p>
<h3>IAdaptable</h3>
<p>Another issue is that EMF objects do not implement <code>IAdaptable</code>
but any object that adapts to a <code>ResourceMapping</code> must. One solution
would be to have <code>EObject</code> implement IAdaptable but this is not possible
since EObject cannot have dependencies on Eclipse. This means that the owner
of the model must ensure that each of their model objects that adapt to <code>ResourceMapping</code>
implement <code>IAdaptable</code> and their <code>getAdapter</code> method matches
that found in <code>org.eclipse.core.runtime.PlatformObject</code>. Another option
is to remove the assumption made by clients that only objects that implement
<code>IAdaptable</code> can be adapted. This is tricky since anyone can be a
client of the adaptable mechanism. We can ensure that the SDK gets updated but
can make no guarantees about other clients.</p>
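<p>The delegation pattern described above can be illustrated with a simplified
sketch. The <code>AdapterManager</code> class here is a stand-in for the Platform's
adapter manager; the essential point is that a model object which cannot extend
<code>PlatformObject</code> must delegate <code>getAdapter</code> itself.</p>

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Simplified stand-in for the Platform adapter manager: maps a target type to
// a factory that adapts a source object.
class AdapterManager {
    private final Map<String, Function<Object, Object>> factories = new HashMap<>();

    void register(Class<?> target, Function<Object, Object> factory) {
        factories.put(target.getName(), factory);
    }

    Object getAdapter(Object source, Class<?> target) {
        Function<Object, Object> f = factories.get(target.getName());
        return f == null ? null : f.apply(source);
    }
}

// Marker class standing in for ResourceMapping in this example.
class MappingStandIn { }

// The model element: it implements the IAdaptable-style method by delegating
// to the adapter manager, mirroring what PlatformObject does.
class ModelElement {
    static final AdapterManager MANAGER = new AdapterManager();

    public Object getAdapter(Class<?> adapter) {
        return MANAGER.getAdapter(this, adapter);
    }
}
```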
<h2><a name="TeamScenarios"></a>Team Scenarios</h2>
<p>In this section, we describe what some Team scenarios might look like with
the logical model integration enhancement we have discussed in previous sections.
We will describe the scenarios in terms of CVS. </p>
<h3>Updating the Workspace</h3>
<p>In this scenario, the user selects one or more model elements and chooses Team&gt;Update.
Currently, each file that is updated gets its new contents
from the server. For files that have both local and remote modifications, the
server attempts a clean merge but if that is not possible, the file will end
up containing CVS specific markup identifying the conflicting sections. For
binary files, no merge is attempted. Instead, the old file is moved and the
new file downloaded. In both these cases, it is the user's responsibility to
resolve the conflicts by editing the file in order to remove any obsolete lines
and the CVS conflict markup or decide which version of the binary file to keep,
respectively. It should be noted that this &quot;after-the-fact&quot; conflict
resolution will not be acceptable for many higher level models.</p>
<p>The goal of a Team&gt;Update is to do an auto-merge if possible and only involve
the user if there are conflicts that need to be resolved. For operations in
the file model space, this can be done on a file by file basis. That is, an
auto-merge can be attempted on each file individually and only those files for
which the auto-merge is not possible would require user intervention. This should
be fairly straightforward to implement for CVS. The <code>IStreamMerger</code>
interface that was added in Eclipse 3.0 can be used to determine whether an
auto-merge is possible and perform the merge if it is. The files for which an
auto-merge is not possible could then be displayed in a dialog, compare editor
or even the sync view in order to allow the user to resolve any conflicts.</p>
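<p>The per-file auto-merge decision can be illustrated with a naive line-based
three-way merge. This is only an approximation of what an
<code>IStreamMerger</code> might do; real mergers are content-type specific and
operate on streams rather than line lists.</p>

```java
import java.util.ArrayList;
import java.util.List;

// Naive three-way merge sketch: for each line position, take whichever side
// changed relative to the ancestor; if both sides changed the same line
// differently, report a non-mergable conflict.
class ThreeWayMergeSketch {
    // Returns the merged lines, or null if there is a non-mergable conflict.
    static List<String> merge(List<String> ancestor, List<String> local,
                              List<String> remote) {
        if (ancestor.size() != local.size() || ancestor.size() != remote.size()) {
            return null; // lines added/removed: treated as a conflict in this sketch
        }
        List<String> merged = new ArrayList<>();
        for (int i = 0; i < ancestor.size(); i++) {
            String a = ancestor.get(i), l = local.get(i), r = remote.get(i);
            if (l.equals(r)) {
                merged.add(l);       // same content on both sides
            } else if (a.equals(l)) {
                merged.add(r);       // only remote changed this line
            } else if (a.equals(r)) {
                merged.add(l);       // only local changed this line
            } else {
                return null;         // both changed it differently: conflict
            }
        }
        return merged;
    }
}
```

A file for which this returns null would be the kind of file shown to the user in a dialog, compare editor, or the sync view for manual resolution.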
<p>It is not clear that this file-by-file approach would be adequate for merges
involving higher level model elements. The reason for this is that it is possible
for a model element to span files. Auto-merging one of those files while leaving
another unmerged may corrupt the model on disk. The decision about when auto-merge
is possible and when it is not can only be made by the model tooling. Therefore,
some portion of the merge will need to be delegated to the model. </p>
<p>There are several sub-scenarios to consider:</p>
<ul>
<li><strong>Update of one or more files, folder or projects</strong>: the update
should only happen at the file level if there are no models participating
in Team operations on those files. If there are participants, then you will
end up in one of the next two scenarios depending on whether there are multiple
participants that match the selected files, folders or projects.</li>
<li><strong>Update of one or more model elements, all from the same model</strong>:
If the model has registered an <code>IResourceMappingMerger</code> with the
platform, then the merge of the model elements belonging to that model will
be delegated to the merger. The model merger will attempt an auto-merge at
the model level thus ensuring that the model on local disk is not corrupted.
If an auto-merge of one or more elements is not possible, these will be returned
back to the Team operation for user intervention. The mechanics of this are
described in more detail below.</li>
<li><strong>Update of one or more model elements from different models</strong>:
In this scenario, each model would be given a chance to merge their model
elements in sequence. This is not really different than the previous case
except that it is conceivable that the merges made by one model may have a
negative effect on another model before it has a chance to perform its merges.
This may be acceptable as it is hard to conceptualize how two independent
model providers could co-exist peacefully with that kind of overlap. </li>
</ul>
<h4>Auto-Merging</h4>
<p>When updating a model element, it may be possible to perform the merge entirely
at the model level. In other words, if an <code>IResourceMappingMerger</code>
is available for one or more resource mappings, the merge can be performed by
the model without ever dropping down to a lower level merge (e.g. file level
merge). This makes the assumption that the model doing the merge will not do
anything that corrupts lower level models. However, it does not ensure that
higher level models will not be corrupted. Hence, ideally, the Team operation
would still need to check for participants at the model level so that higher
level models could include other resource mappings in the merge if required.</p>
<p>If no model level merge is available, the update will need to be performed
at the file level. This means that participants at the file level must be queried
for additional resource mappings, and the merges can then be performed on
these files using the appropriate <code>IStreamMerger</code>.</p>
<h4>Manual Merging</h4>
<p>Model objects that cannot be merged automatically need to be merged manually.
There are two main pieces required to create a UI to allow the user to perform
the manual merge:</p>
<ul>
<li>A view (most likely a tree) that displays the model elements to be merged.
Ideally, this view should show the synchronization state of each element and
should also show the relationship between the files and model elements being
merged. It may be possible to extend the Team synchronization framework to
provide this functionality.</li>
<li>From each separately mergable element, it should be possible to open a compare
editor that supports the merging of that model element. It is not necessary
for each visible element to be mergable as some elements may act simply as
organizational elements. These elements would appear near the top of the tree.
Lower level elements (leaves but possibly others) should be openable (although
the merge editor that is opened may contain other elements if that was how
the model deemed they should be merged at the same time).</li>
</ul>
<p>Both of these pieces must be available given a set of resource mappings. The
adaptable mechanism should be adequate to provide these in whatever form they
take. If either are absent, the manual merges can still be performed at the
file level.</p>
<h3>Committing or Checking In Changes</h3>
<p>For repositories, check-ins or commits happen at the file level. Here are some
considerations when supporting commits on logical model elements.</p>
<ul>
<li>The user must be made aware of any additional model elements that will be
included in the commit due to their relationship to the files being committed.
For example, committing an element that is persisted in a file with other elements
will also result in the committing of all the other elements in that file.</li>
<li>The elements being committed should be displayed in a model appropriate
way.</li>
<li>Appropriate feedback should be provided to the user if the commit cannot
proceed due to conflicts.</li>
</ul>
<p>Ideally, what the user would like to see is all the files and model elements
being committed arranged in such a way that the relationships between them are
obvious. If elements in addition to those originally selected are included
in the commit, these should be highlighted in some manner.</p>
<h3>Tagging Elements</h3>
<p>Tagging in repositories happens at the file level and, at least in CVS, can
only be applied to content that is in the repository. This leads to the following
two considerations when tagging:</p>
<ul>
<li>The user needs to be made aware of any outgoing changes that will not be
tagged.</li>
<li>Tagging a model element that is persisted in a file with other model elements
may also result in those elements being tagged (i.e. the tag is applied to
the file and, hence, any elements contained in the file).</li>
</ul>
<p>The above two points really require two different views. The first is much
like the view used for committing where the user sees any outgoing changes but
this time with a message indicating that it is the ancestor state of these elements
that will be tagged. The second is just a model based view that highlights those
elements that will be tagged but were not in the original selection.</p>
<h3>Replacing Elements</h3>
<p>Replacing is similar to Update but is not as complicated as the local changes
are discarded and replaced by the remote contents (i.e. no merging is required).
However, there are the following considerations:</p>
<ul>
<li>The user needs to be made aware of any outgoing changes that will be lost.</li>
<li>Replacing a model element that is persisted in a file with other model elements
may also affect those elements.</li>
</ul>
<p>The requirements here are similar to tagging except that the determination
of additional elements is based on what the incoming changes are and, hence,
could be displayed in a synchronization type view. There are similarities with
update in the sense that the existence of an <code>IResourceMappingMerger</code>
may mean that extra elements need not be affected at all.</p>
<p>As with Update, Replacing could be performed at the model level if the model
has an associated <code>IResourceMappingMerger</code>. The mechanics would be
similar to Update except that no manual merge phase would be required. Also,
the model merger would either need a separate method (<code>replace</code>)
or a flag on the <code>merge</code> method (<code>ignoreLocalChanges</code>)
to indicate that a replace was occurring. When performing a replace, the ancestor
context is not required.</p>
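<p>The proposed <code>ignoreLocalChanges</code> flag could behave as in the
following sketch, reduced to single-file contents. The method shape is an
illustrative assumption, not the actual Team API: a replace is a merge that
takes the remote contents unconditionally and never consults an ancestor.</p>

```java
// Sketch of the proposed ignoreLocalChanges flag on the merge method.
class ReplaceSketch {
    static String merge(String local, String remote, boolean ignoreLocalChanges) {
        if (ignoreLocalChanges) {
            // Replace: discard local changes; no ancestor, no conflict handling.
            return remote;
        }
        // A real merge would consult the ancestor here; this sketch simply
        // keeps the local contents when the two sides differ.
        return local.equals(remote) ? remote : local;
    }
}
```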
<h3>Synchronizing and Comparing</h3>
<p>The ability to provide model support in the synchronize view would be a natural
byproduct of several of the requirements discussed above. To summarize, what
would be required is:</p>
<ul>
<li>The ability to display the synchronization state of model elements.</li>
<li>The ability to trigger auto-merges on selected model elements.</li>
<li>The ability to open a compare editor in order to perform a manual merge.</li>
</ul>
<p>These are all included as requirements for previously mentioned operations.
The only additional requirement for Synchronize view integration is that the
synchronization state display must keep itself up-to-date with local file system
changes and remote changes. The synchronize view already has infrastructure
for this at the file level which the model provider could use to ensure that
the model elements in the view were kept up-to-date.</p>
<h2>Summary of Requirements</h2>
<p>This section presents the requirements on various parties for this proposal.
The parties we consider are the Eclipse Platform, Model Providers and Repository
Providers.</p>
<h3>Eclipse Platform</h3>
<p>The 3.2 Eclipse Platform release schedule is:</p>
<ul>
<li>Friday Aug. 12, 2005 - Milestone 1 (3.2 M1) - stable build </li>
<li>Friday Sep. 23, 2005 - Milestone 2 (3.2 M2) - stable build </li>
<li>Friday Nov. 4, 2005 - Milestone 3 (3.2 M3) - stable build </li>
<li>Friday Dec. 16, 2005 - Milestone 4 (3.2 M4) - stable build </li>
</ul>
<p>The Platform work items in this proposal and their target availability
dates are:</p>
<ul>
<li>Model Aware Problems View
<ul>
<li>Improved filtering (available in 3.2 M2)</li>
<li>Generic problem fields (3.2 M3: bug <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=111058">111058</a>)</li>
<li>Model aware selection filter (3.2 M3: bug <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=111054">111054</a>)</li>
<li>Problem type specific Show In (3.2 M3: bug <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=111057">111057</a>)</li>
<li>Problem type specific details pane (investigate for 3.2 M3: bug <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=111059">111059</a>)</li>
<li>Context aware filtering (investigate in 3.2 M3 and target 3.2 M4)</li>
<li>Model specific filters (investigate in 3.2 M3 and target 3.2 M4)</li>
</ul>
</li>
<li>Context Awareness
<ul>
<li>Perspective/window level control over view filtering (investigate in
3.2 M3 and target 3.2 M4)</li>
<li>Out-of-context operation participation (investigate in 3.2 M3 and target
3.2 M4)</li>
</ul>
</li>
<li>Common Navigator
<ul>
<li>Accept contribution from WTP (should happen in 3.2 M3)</li>
<li>Team Contexts (gated by Common Navigator: target 3.2 M4)
<ul>
<li>Provide helper classes to ease implementation and use of the API</li>
</ul>
</li>
</ul>
</li>
<li>Model Level Merging
<ul>
<li>Define the API (3.2 M3)</li>
<li>Provide CVS implementation (3.2 M3)</li>
<li>Provide helper classes to ease implementation and use of the API (3.2
M4) </li>
</ul>
</li>
<li>Remote Discovery (investigate in 3.2 M3, too early in investigation to commit
to a target date)</li>
</ul>
<p>Target dates are given for all items, but they are subject to change, especially
for those items currently under investigation. For Remote Discovery, we are
too early in our investigation to commit to a delivery date.</p>
<h3>Model Providers</h3>
<p>The model providers will need to do the following work to make full use of
the support outlined in this proposal.</p>
<ul>
<li>Adapt model elements to <code>ResourceMapping</code>.</li>
<li>Define a <code>ModelProvider</code> for determining team operation participation.</li>
<li>Provide an <code>IResourceMappingMerger</code> for performing model level
merges.</li>
<li>Provide a <code>NavigatorContentExtension</code> for the Common Navigator.
<ul>
<li>include support for team contexts
<ul>
<li>dynamic update based on resource and team changes</li>
<li>decoration using resource synchronization states</li>
</ul>
</li>
</ul>
</li>
<li>Provide context specific configuration information to configure views that
appear in the model provider's perspective (the nature of which is to be determined).
Some potential configuration points are:
<ul>
<li>Problems view filters</li>
<li>Navigator filters</li>
<li>Operation participation/veto</li>
</ul>
</li>
</ul>
<p>A model provider can choose to provide any, some or all of the above facilities.
For those it does not provide, a suitable resource-based default implementation
will be used.</p>
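<p>As a rough illustration of the first item, the sketch below shows how a model
element could expose a resource mapping through an adapter. The types are simplified
stand-ins for the Platform adapter mechanism and <code>ResourceMapping</code>
(the real types live in <code>org.eclipse.core.runtime</code> and
<code>org.eclipse.core.resources.mapping</code>), and the model element and file
names are hypothetical.</p>

```java
// Simplified stand-ins for the Platform adapter mechanism and ResourceMapping.

interface Adaptable {
    Object getAdapter(Class adapterType);
}

// A mapping from a model element to the workspace resources it occupies.
abstract class ResourceMapping {
    abstract Object getModelObject();
    abstract String[] getResourcePaths(); // simplified: paths, not IResource[]
}

// A hypothetical model element whose logical content spans two files.
class DeploymentDescriptor implements Adaptable {
    public Object getAdapter(Class adapterType) {
        if (adapterType == ResourceMapping.class) {
            return new ResourceMapping() {
                Object getModelObject() { return DeploymentDescriptor.this; }
                String[] getResourcePaths() {
                    return new String[] { "web.xml", "context.xml" };
                }
            };
        }
        return null; // not adaptable to the requested type
    }
}
```

<p>A team operation would then ask the selected element for its mapping and
operate on the underlying resources, without knowing anything about the model
itself.</p>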
<h3>Repository Providers</h3>
<p>Repository providers will need to provide the following:</p>
<ul>
<li><code>RemoteResourceMappingContext</code> that allows the model to view
the ancestor or remote state of the repository.</li>
<li><code>ISynchronizationContext</code> that allows the model to query the
synchronization state of the local resources with respect to the ancestor
and remote repository state.</li>
<li><code>IMergeContext</code> which supports programmatic invocation of merge
operations.</li>
<li>A Remote Discovery API, the nature of which has yet to be determined.</li>
</ul>
<p>The repository provider can decide which of these facilities to provide,
and supporting only a subset may reduce the amount of work required. However,
achieving rich integration requires the repository provider to implement all
of them.</p>
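<p>The synchronization state computation behind <code>ISynchronizationContext</code>
can be sketched as a three-way comparison of local, ancestor and remote states.
The class below is a simplified stand-in, not the proposed API; a real
implementation would operate on <code>IResource</code>s rather than path strings.</p>

```java
import java.util.Map;
import java.util.Objects;

// Simplified stand-in for a synchronization context: given the local,
// ancestor and remote contents of each resource, report a three-way
// synchronization state.
enum SyncState { IN_SYNC, OUTGOING, INCOMING, CONFLICTING }

class SyncContext {
    private final Map<String, String> local;
    private final Map<String, String> ancestor;
    private final Map<String, String> remote;

    SyncContext(Map<String, String> local, Map<String, String> ancestor,
                Map<String, String> remote) {
        this.local = local;
        this.ancestor = ancestor;
        this.remote = remote;
    }

    // A resource is outgoing if it changed locally relative to the ancestor,
    // incoming if it changed remotely, and conflicting if both.
    SyncState getState(String path) {
        boolean outgoing = !Objects.equals(local.get(path), ancestor.get(path));
        boolean incoming = !Objects.equals(remote.get(path), ancestor.get(path));
        if (outgoing && incoming) return SyncState.CONFLICTING;
        if (outgoing) return SyncState.OUTGOING;
        if (incoming) return SyncState.INCOMING;
        return SyncState.IN_SYNC;
    }
}
```

<p>A model provider could query such a context for each resource covered by a
mapping and roll the per-file states up into a state for the model element.</p>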
<h2>Open Issues</h2>
<p>Here are some open issues and questions:</p>
<ul>
<li>One of the issues presented in this document is that of model structure
vs. tree view structure. Often these are the same, but in some instances
they differ due to the use of content providers. When they differ, should
team operations and item decoration correspond to the model structure or the
view structure? I think it is safe to say that whichever approach is deemed
more suitable, the same approach should be used for both decoration and team
operations so that a consistent message is shown to the user. That is, clicking
on a dirty item in a view and choosing to commit or check in the item should
always result in something being checked in.</li>
<li>A related issue is that of view filters. If a view is filtering out particular
elements, should a Team operation on a parent of such an element include the
element or not? Currently it does: if you filter elements out of
the Navigator or Package Explorer, they will still be included in Team operations
on their parent. Whatever the chosen approach is, it should be applied to
both Team operations and decorations.</li>
<li>Another question that remains is whether repository providers and model
providers can implement these interfaces. We are confident that we can make
this work for CVS, but other repository providers are still a question mark.
The Platform itself lacks any complex models, so it is difficult for us to
judge the effort required to support such models.</li>
</ul>
<h3>Assumptions and Limitations</h3>
<p>Here are some assumptions we have made or limitations that may exist.</p>
<ul>
<li>We have attempted to address the issue of model overlap by allowing participants
to include additional resource mappings in a team operation. However, this
mechanism will only ensure that the additional resource mappings are included.
The presentation and operation on these separate models will happen independently.
If there are implicit dependencies between these models, it is possible that
an operation on one will corrupt the other. The proposed solutions currently
do not address this issue.</li>
<li>Currently, resource mappings are provided by the model objects (using the
adapter mechanism). This assumes that all the elements in a given view
come from the same model. It is conceivable that a view could
be showing elements from multiple models; that case is not addressed by this proposal.</li>
</ul>
<h2>Change History</h2>
<p>Changes in Version 0.3</p>
<ul>
<li>Expanded Common Navigator section to include description of the relationship
between the context of a team operation and the Navigator content extension.</li>
<li>Added Operation Veto subsection to Maintaining Workspace Consistency section.</li>
<li>Changed Team Operation Input Determination section to use of Model Providers
instead of Team Participants.</li>
<li>Added Team Aware Model Views subsection of the Decorations section which
describes an alternate approach to providing team decorations.</li>
<li>Updated the Model Level Merging section to incorporate the team context
concept.</li>
<li>Updated the Displaying Model Elements in Team Operations sections to reflect
that these requirements can be met using team contexts.</li>
<li>Added Summary of Requirements section.</li>
</ul>
<p>Changes in Version 0.2</p>
<ul>
<li>Added Scenario descriptions to Remote Discovery section.</li>
<li>Reworded EMF section to more accurately reflect the issues.</li>
<li>Added reference and link to the <a href="http://dev.eclipse.org/viewcvs/index.cgi/%7Echeckout%7E/platform-ui-home/R3_1/contributions-proposal/requestForComments.html">Improve
Object Contributions</a> proposal from the Maintaining Workspace Consistency
section.</li>
<li>Added statement of repository/model requirements to Team Operations on Model
Elements section.</li>
<li>Added explicit lifecycle description to Team Operations section.</li>
</ul>
</body>
</html>