[{"categories":null,"contents":"At the center of EMF Cloud, there is a model management component, which provides consistent model access, as well as an interface for manipulating models and listening to model changes across various editors, views, and components interacting with a model. This model management component is called Model Hub (Typescript).\n The Model Hub is an extensible central model management component, which offers a common API to all clients for accessing and interacting with models. As such, the model hub provides plenty of generic functionalities, such as dirty state handling of models, undo/redo, notification mechanisms for various events, etc. It supports clients living in the frontend (browser), as well as clients that are running in a backend (e.g. node.js).\nFor each modeling language or format, you can register a modeling language contribution, which defines how certain model-specific capabilities are implemented, such as persisting models, model-specific operations or APIs that can be invoked for your model, how cross-references are to be resolved within a workspace, validation rules, etc. This gives you full control and customizability in all relevant aspects of the model management for your modeling language, format, or data source, whether it is a JSON file, custom file format, database, or REST service that you want to make available to your clients by integrating them with the Model Hub.\nTo make your life easier, you don\u0026rsquo;t have to implement all of those Model Hub capabilities from scratch for every modeling language. Instead EMF Cloud provides reusable libraries and integration code for third-party components to cover the most common choices and formats, such as JSON files. 
Also, EMF Cloud contains libraries that make it easy to connect and interact with the Model Hub.\n➡️ Best to get started with the Coffee Editor NG!\nNote on EMF: If you need to support EMF-based models, there is a dedicated Java-based model management component for EMF models. For more information, please head over to the EMF support documentation. For all other use cases, we recommend using the Model Hub as it provides a more homogeneous developer experience based on Typescript throughout the client and the server.\n","permalink":"https://www.eclipse.dev/emfcloud/documentation/overview/","tags":null,"title":"Overview"},{"categories":null,"contents":"We will provide a live demo as well as a link to GitHub as soon as the code is published. In the meantime, you can have a look at the Coffee Editor NG architecture!\n","permalink":"https://www.eclipse.dev/emfcloud/documentation/gettingstarted/","tags":null,"title":"Getting Started"},{"categories":null,"contents":"The Coffee Editor NG is a comprehensive example modeling tool based on EMF Cloud technologies and can also act as an architecture blueprint for your custom modeling tool. It thus brings together a set of best practices in architecture and technology selection from both inside and outside of EMF Cloud. This full-featured example tool is written entirely in Typescript and includes a central Model Hub serving a sample modeling language to a variety of editors, such as a diagram editor, a form-based editor, as well as a textual DSL editor.\n The core of the Coffee Editor NG is a modeling language contribution to the Model Hub. With this language contribution, we register the modeling language alongside the capabilities for handling this modeling language in the Model Hub. The contributed Coffee Editor NG Model is defined with Langium, an open-source language toolkit for textual languages. 
The Langium-based language is then integrated into the Model Hub API with the generic EMF Cloud Model Hub Langium integration library, so that Model Hub clients can interact with the language on model level and benefit from Langium\u0026rsquo;s efficient persistence, cross-reference management and validation mechanisms.\nIn order to also simplify the editing capabilities, including undo and redo, on model level \u0026ndash; rather than on text level \u0026ndash; we use the EMF Cloud Editing Domain, which provides state management, as well as a command API and a command stack for arbitrary JSON models.\nFinally, the Coffee Editor NG language contribution adds a custom coffee-model-specific API to be used by clients in order to provide reusable model queries and manipulation functions to the clients.\nWe will provide a live demo as well as a link to GitHub as soon as the code is published.\nIn the remainder of this documentation, we use the Coffee Editor NG as an example to demonstrate how certain capabilities can be customized, extended or exchanged with other implementations. Please use the table of contents on the left to navigate to the respective topic of interest.\n","permalink":"https://www.eclipse.dev/emfcloud/documentation/coffeeeditorng/","tags":null,"title":"Coffee Editor NG"},{"categories":null,"contents":"The EMF Cloud Model Hub is a central model management component that coordinates multiple clients, such as different editors, in their interaction with and manipulation of models. The Model Hub not only provides a generic API to access models, but is extensible with respect to different modeling languages. Below we cover not only how clients can access models but also how new modeling languages can be registered.\nInteracting with models An application may contain several model hubs, each associated with its own \u0026ldquo;context\u0026rdquo;. The context is a unique string identifier. 
If an application requires a single model hub context, it may use a constant string; but it is also possible to use more dynamic values that take the current editor into account (e.g. using a folder path, or an application ID, or any value relevant to the application being developed).\nEach instance of model hub comes with its own set of contributions, services, models and states.\nTo access the model hub for a given context, we use the ModelHubProvider, which is registered as a Theia Extension:\nimport { ModelHubProvider } from \u0026#39;@eclipse-emfcloud/model-service-theia/lib/node/model-hub-provider\u0026#39;; import { ModelHub } from \u0026#39;@eclipse-emfcloud/model-service\u0026#39;; @injectable() class ModelHubExample { @inject(ModelHubProvider) modelHubProvider: ModelHubProvider; modelHub: ModelHub; async initializeModelHub() { this.modelHub = await this.modelHubProvider(\u0026#39;my-application-context\u0026#39;); } } Loading and saving models Loading and saving models can be achieved by calling the corresponding methods on your model hub instance, assuming contributions have been registered that can handle the requested model IDs. Model IDs are string identifiers that represent a model. They are typically URIs, but can be any arbitrary strings, as long as a Persistence Contribution is able to handle them (see the Persistence section below).\nconst modelId = \u0026#39;file:///coffee-editor/examples/workspace/superbrewer3000.coffee\u0026#39;; const model: object = await modelHub.getModel(modelId); If you\u0026rsquo;re certain about the type of your model, you can also directly cast it to the necessary type:\nconst modelId = \u0026#39;file:///coffee-editor/examples/workspace/superbrewer3000.coffee\u0026#39;; const model: CoffeeModelRoot = await modelHub.getModel\u0026lt;CoffeeModelRoot\u0026gt;(modelId); Note: If the model is already loaded, the in-memory instance will be immediately returned. 
Otherwise, the model hub will look for a Persistence Contribution that can handle the requested modelId, and load it before returning it. Since loading may require asynchronous operations, the getModel() method is itself asynchronous.\nAfter applying some changes, you can save your model. For editing the model, the ModelHub uses Commands executed on a CommandStack, identified by a CommandStackId. When using a single model, the commandStackId can be the same value as the modelId. However, since Commands may affect multiple models in some cases, you may want to use a different CommandStackId. When saving this CommandStack, all models that have been modified by a Command executed on this CommandStack will be saved.\n// In this example, we use a single model, so we can use the modelId as the commandStackId. const modelId = \u0026#39;file:///coffee-editor/examples/workspace/superbrewer3000.coffee\u0026#39;; const commandStackId = modelId; modelHub.save(commandStackId); Alternatively, you can save all modified models on all available Command Stacks:\nmodelHub.save(); Resolving references If you have cross references in your model, i.e., pointers to other nodes either within the same model or another model in another document, you need to consider how those references should be represented and how they can be resolved to the referenced element. By default, references are represented as information about a typed and named node located at a particular path within a document. The main reasoning for this representation is that it uniquely identifies an element and can be serialized and sent to clients who can later use that information to resolve the actual element. However, not all cross references may be resolvable due to changes in the model or the modeling language used. In such cases, we want to provide as much information as we can to the client by at least giving the text that was used in the model for the reference and a potentially custom error. 
Treating reference errors as just another case in reference resolution allows them to be effectively handled by any type of client, no matter the visual representation.\nexport interface NodeReferenceInfo { /** URI to the document containing the referenced element. */ $documentUri: string; /** Navigation path inside the document */ $path: string; /** `$type` property value */ $type: string; /** Name of element */ $name: string; /** Generic object properties */ [x: string]: unknown; } export interface ReferenceError { $refText: string; $error: string; } export type ReferenceInfo = NodeReferenceInfo | ReferenceError; While the ModelHub server flattens the reference information to be serializable, on the client side we often want to interact with the actual element that the reference represents instead of always being aware that there is a reference that we need to resolve. To ease that more natural use of an object graph on the client side, we provide a utility function that replaces all unresolved references with reference objects that can query the object using a custom resolution mechanism. In its purest form such a reference object may simply go to the ModelHub server and query the node based on the node info. 
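To make this resolution strategy concrete, the sketch below shows a reference factory that lazily queries a lookup function and caches the result. The local type definitions and the `NodeLookup` callback are simplified stand-ins for illustration, not the actual library API:

```typescript
// Simplified stand-ins for the ModelHub reference types (illustrative only).
interface NodeReferenceInfo {
  $documentUri: string;
  $path: string;
  $type: string;
  $name: string;
}
interface ReferenceError {
  $refText: string;
  $error: string;
}
type ReferenceInfo = NodeReferenceInfo | ReferenceError;

interface Reference<T> {
  element(): Promise<T | undefined>;
  error(): string | undefined;
}

// Hypothetical lookup against the ModelHub server; a real implementation
// would issue a request using the document URI and navigation path.
type NodeLookup<T> = (info: NodeReferenceInfo) => Promise<T | undefined>;

function createHubReferenceFactory<T>(lookup: NodeLookup<T>) {
  return (info: ReferenceInfo): Reference<T> => {
    if ('$error' in info) {
      // Unresolvable reference: expose the error, resolve to undefined.
      const message = info.$error;
      return { element: async () => undefined, error: () => message };
    }
    const nodeInfo = info; // narrowed to NodeReferenceInfo
    let cached: T | undefined; // resolve at most once
    return {
      element: async () => (cached ??= await lookup(nodeInfo)),
      error: () => undefined
    };
  };
}
```

In more demanding scenarios the lookup could be backed by a cache or a batched request, which is exactly the kind of customization the utility is meant to allow.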
In more complex or high performance scenarios a different resolution or caching may be introduced.\nexport type Reference\u0026lt;T\u0026gt; = Partial\u0026lt;NodeReferenceInfo\u0026gt; \u0026amp; Partial\u0026lt;ReferenceError\u0026gt; \u0026amp; { element(): Promise\u0026lt;T | undefined\u0026gt;; error(): string | undefined; }; export type ReferenceFactory\u0026lt;T\u0026gt; = (info: ReferenceInfo) =\u0026gt; Reference\u0026lt;T\u0026gt;; export function reviveReferences\u0026lt;T extends object\u0026gt;(obj: T, referenceFactory: ReferenceFactory\u0026lt;T\u0026gt;): T { for (const [key, value] of Object.entries(obj)) { if (isReferenceInfo(value)) { (obj as any)[key] = referenceFactory(value); } else if (value \u0026amp;\u0026amp; typeof value === \u0026#39;object\u0026#39;) { reviveReferences(value, referenceFactory); } } return obj; } Changing models For the sake of isolation, the ModelHub doesn\u0026rsquo;t expose methods to directly edit the models. Instead, Model Contributions are expected to register a Model Service that will be responsible for handling all editing operations on the models it handles. Any application interested in editing these models can request the corresponding Model Service, then call any of the exposed API methods to perform edit operations.\nEdit Operations take the form of Commands that are executed on a CommandStack. 
A Command may change one or several Models, and supports Undo/Redo operations.\nCommands are usually handled directly by the Model Service, so they will not be visible to the client of the Model Service.\nexport interface CoffeeModelService { getCoffeeModel(modelUri: string): Promise\u0026lt;CoffeeModelRoot | undefined\u0026gt;; unload(modelUri: string): Promise\u0026lt;void\u0026gt;; edit(modelUri: string, patch: Operation[]): Promise\u0026lt;PatchResult\u0026gt;; createNode(modelUri: string, parent: string | Workflow, args: CreateNodeArgs): Promise\u0026lt;PatchResult\u0026gt;; } /** * This constant is used to register the CoffeeModelService and to retrieve * it from the ModelHub. */ export const COFFEE_SERVICE_KEY = \u0026#39;coffeeModelService\u0026#39;; // Access and use the model service const modelService: CoffeeModelService = modelHub.getModelService\u0026lt;CoffeeModelService\u0026gt;(COFFEE_SERVICE_KEY); await modelService.createNode(modelId, \u0026#39;/workflows/0\u0026#39;, { type: \u0026#39;AutomaticTask\u0026#39; }); Validating models Model Contributions may register Validators. These Validators will be invoked whenever the ModelHub is validated:\nconst modelId = \u0026#39;file:///coffee-editor/examples/workspace/superbrewer3000.coffee\u0026#39;; const diagnostic = await modelHub.validateModels(modelId); Several models can be validated at the same time:\nconst modelId1 = \u0026#39;file:///coffee-editor/examples/workspace/superbrewer2000.coffee\u0026#39;; const modelId2 = \u0026#39;file:///coffee-editor/examples/workspace/superbrewer3000.coffee\u0026#39;; const diagnostic = await modelHub.validateModels(modelId1, modelId2); Or you can validate all models currently loaded, by omitting the modelIds argument:\nconst diagnostic = await modelHub.validateModels(); Note: In the latter case, only models currently loaded will be validated. 
Since the model hub relies on lazy-loading to identify existing models, it may ignore some models present in your workspace, if they have never been explicitly loaded beforehand.\nThe validateModels method will validate all requested models, then return the validation results, in the form of a Diagnostic.\nIf you\u0026rsquo;re only interested in the latest known validation results, but don\u0026rsquo;t want to wait for a full validation cycle, you can use getValidationState instead. This method doesn\u0026rsquo;t trigger any validation, but returns the result from the latest validation:\nconst modelId = \u0026#39;file:///coffee-editor/examples/workspace/superbrewer3000.coffee\u0026#39;; const currentDiagnostic = modelHub.getValidationState(modelId); Contributing modeling languages The Model Hub can handle several aspects for each Modeling Language:\n Persistence (Save/Load) Editing (via Model Services and Commands) Validation Triggers All of these aspects can be registered using a ModelServiceContribution. The only mandatory aspect is Editing, via a Model Service.\n/** * Our Model Service identifier. Used by clients to retrieve our Model Service. */ export const COFFEE_SERVICE_KEY = \u0026#39;coffeeModelService\u0026#39;; @injectable() export class CoffeeModelServiceContribution extends AbstractModelServiceContribution { private modelService: CoffeeModelService; @postConstruct() protected init(): void { this.initialize({ id: COFFEE_SERVICE_KEY }); } getModelService\u0026lt;S\u0026gt;(): S { if (!this.modelService) { this.modelService = new CoffeeModelServiceImpl(); } return this.modelService as unknown as S; } } This minimal example lacks critical capabilities that are required by most applications: persistence, and access to the Model Manager, in order to execute Commands. Here\u0026rsquo;s a more complete and realistic example:\n/** * Our Model Service identifier. Used by clients to retrieve our Model Service. 
*/ export const COFFEE_SERVICE_KEY = \u0026#39;coffeeModelService\u0026#39;; @injectable() export class CoffeeModelServiceContribution extends AbstractModelServiceContribution { private modelService: CoffeeModelService; constructor(@inject(CoffeeLanguageModelService) private languageService: CoffeeLanguageModelService){ // Empty constructor } @postConstruct() protected init(): void { this.initialize({ id: COFFEE_SERVICE_KEY, persistenceContribution: new CoffeePersistenceContribution(this.languageService) }); } getModelService\u0026lt;S\u0026gt;(): S { return this.modelService as unknown as S; } setModelManager(modelManager: ModelManager): void { super.setModelManager(modelManager); // Forward the model manager to our model service, so it can actually // execute some commands. this.modelService = new CoffeeModelServiceImpl(modelManager, this.languageService); } } class CoffeePersistenceContribution implements ModelPersistenceContribution { constructor(private languageService: CoffeeLanguageModelService) { // Empty } canHandle(modelId: string): Promise\u0026lt;boolean\u0026gt; { // This example handles file URIs with the \u0026#39;.coffee\u0026#39; extension return Promise.resolve(modelId.startsWith(\u0026#39;file:/\u0026#39;) \u0026amp;\u0026amp; modelId.endsWith(\u0026#39;.coffee\u0026#39;)); } async loadModel(modelId: string): Promise\u0026lt;object\u0026gt; { // Load our model from file... } async saveModel(modelId: string, model: object): Promise\u0026lt;boolean\u0026gt; { // Save the new model to file... 
} } Persistence Persistence is handled by specifying a ModelPersistenceContribution in your ModelServiceContribution.\n@postConstruct() protected init(): void { this.initialize({ id: COFFEE_SERVICE_KEY, persistenceContribution: new CoffeePersistenceContribution(this.languageService) }); } The Persistence contribution needs to implement three methods: canHandle(modelId) to indicate which models it supports, loadModel(modelId) and saveModel(modelId, model) for the actual persistence.\nclass CoffeePersistenceContribution implements ModelPersistenceContribution { constructor(private languageService: CoffeeLanguageModelService) { // Empty } canHandle(modelId: string): Promise\u0026lt;boolean\u0026gt; { // This example handles file URIs with the \u0026#39;.coffee\u0026#39; extension return Promise.resolve(modelId.startsWith(\u0026#39;file:/\u0026#39;) \u0026amp;\u0026amp; modelId.endsWith(\u0026#39;.coffee\u0026#39;)); } async loadModel(modelId: string): Promise\u0026lt;object\u0026gt; { // Load our model from file... } async saveModel(modelId: string, model: object): Promise\u0026lt;boolean\u0026gt; { // Save the new model to file... } } Cross References As discussed in the reference resolution section, cross references that stem from your custom Langium-based modeling language need to be serializable so they can be sent to ModelHub clients. Cross references in Langium are regular objects that may contain cycles. Breaking those cycles is the main purpose of the AstLanguageModelConverter, a converter between the Langium-based AST model and the client language model. By default, the ModelHub converter converts the Reference objects from Langium to ReferenceInfo objects that can be serialized and later revived again based on the document location:\nexport class DefaultAstLanguageModelConverter implements AstLanguageModelConverter { ... protected replacer(_source: AstNode, key: string, value: unknown): unknown { ... 
if (isReference(value)) { return this.replaceReference(value); } return value; } protected replaceReference(value: Reference\u0026lt;AstNode\u0026gt;): client.ReferenceInfo { return value.$nodeDescription \u0026amp;\u0026amp; value.ref ? { $documentUri: getDocument(value.ref).uri.toString(), $name: value.$nodeDescription.name, $path: value.$nodeDescription.path, $type: value.$nodeDescription.type } : { $refText: value.$refText, $error: value.error?.message ?? \u0026#39;Could not resolve reference: \u0026#39; + value.$refText }; } protected reviveNodeReference(container: AstNode, reference: client.NodeReferenceInfo): Reference { const node = this.resolveClientReference(container, reference); return { $refText: reference.$name, $nodeDescription: { documentUri: URI.parse(reference.$documentUri), name: reference.$name, path: reference.$path, type: reference.$type }, $refNode: node?.$cstNode }; } protected resolveClientReference(container: AstNode, reference: client.NodeReferenceInfo): AstNode | undefined { const uri = URI.parse(reference.$documentUri); const root = uri ? this.documents.getOrCreateDocument(uri).parseResult.value : container; return this.getAstNodeLocator(root.$document?.uri)?.getAstNode(root, reference.$path); } } As with all other services, the behavior of this conversion can be adapted by extending or completely replacing the implementation and re-binding it in the respective module.\nEditing Domain TODO\nValidators A ModelServiceContribution can register a ValidationContribution, which will return a list of Validators. 
Validators will be invoked for all models (including the ones not actually handled by the Model Service Contribution), so they need to implement some kind of Type Guards to decide if they should actually try to validate a model or ignore it.\n@postConstruct() protected init(): void { this.initialize({ id: COFFEE_SERVICE_KEY, validationContribution: new CoffeeValidationContribution(this.languageService) }); } A ValidationContribution simply returns a list of Validators:\nclass CoffeeValidationContribution implements ModelValidationContribution { constructor(private languageService: CoffeeLanguageModelService){ // Empty } getValidators(): Validator[] { return [ new WorkflowValidator(), new TaskValidator() ]; } } class WorkflowValidator implements Validator\u0026lt;string\u0026gt; { async validate(modelId: string, model: object): Promise\u0026lt;Diagnostic\u0026gt; { // Start with a model typeguard, as all validators will be invoked // for all models. if (isWorkflow(modelId, model)){ // Check that model is a well-formed Workflow... } else { return ok(); } } } class TaskValidator implements Validator\u0026lt;string\u0026gt; { async validate(modelId: string, model: object): Promise\u0026lt;Diagnostic\u0026gt; { if (isWorkflow(modelId, model)){ const tasks = model.nodes.filter( node =\u0026gt; node.type === \u0026#39;AutomaticTask\u0026#39; || node.type === \u0026#39;ManualTask\u0026#39;); for (const task of tasks){ // Check that each Task is well-formed... } } else { return ok(); } } } Custom APIs Model Service Contributions may (and typically should) expose a public Model Service API, that can be used to interact with the models it provides. A Model Service is identified by a Key, and defined by an Interface. 
The implementation is then provided by the Model Service Contribution.\nModel Service definition, exposed to all clients that may require it:\n/** * Our custom language-specific Model Service API */ export interface CoffeeModelService { getCoffeeModel(modelUri: string): Promise\u0026lt;CoffeeModelRoot | undefined\u0026gt;; unload(modelUri: string): Promise\u0026lt;void\u0026gt;; edit(modelUri: string, patch: Operation[]): Promise\u0026lt;PatchResult\u0026gt;; createNode(modelUri: string, parent: string | Workflow, args: CreateNodeArgs): Promise\u0026lt;PatchResult\u0026gt;; } /** * This constant is used to register the CoffeeModelService and to retrieve * it from the ModelHub. */ export const COFFEE_SERVICE_KEY = \u0026#39;coffeeModelService\u0026#39;; Model Service contribution, used to register our language (minimal example):\n@injectable() export class CoffeeModelServiceContribution extends AbstractModelServiceContribution { private modelService: CoffeeModelService; @postConstruct() protected init(): void { this.initialize({ id: COFFEE_SERVICE_KEY }) } getModelService\u0026lt;S\u0026gt;(): S { return this.modelService as unknown as S; } setModelManager(modelManager: ModelManager): void { super.setModelManager(modelManager); // Forward the model manager to our model service, so it can actually // execute some commands. 
this.modelService = new CoffeeModelServiceImpl(modelManager); } } The API can then be retrieved and used by any model hub client:\nconst coffeeModelService = modelHub.getModelService\u0026lt;CoffeeModelService\u0026gt;(COFFEE_SERVICE_KEY); const modelUri = \u0026#39;file:///coffee-editor/examples/workspace/superbrewer3000.coffee\u0026#39;; const coffeeModel = await coffeeModelService.getCoffeeModel(modelUri); const createArgs = { type: \u0026#39;AutomaticTask\u0026#39;, name: \u0026#39;new task\u0026#39; }; await coffeeModelService.createNode(modelUri, coffeeModel.workflows[0], createArgs); ","permalink":"https://www.eclipse.dev/emfcloud/documentation/modelhub/","tags":null,"title":"Model Hub"},{"categories":null,"contents":"In Model Hub, languages can be easily defined using Eclipse Langium. Langium is an open source language engineering tool that allows you to declare the syntax of your modeling language in the form of an EBNF-like grammar. From that grammar, the Langium CLI can generate a complete TypeScript-based language server, including syntax highlighting, auto-completion, cross references, validation and many other features. Internally, Langium uses a Chevrotain parser extended with an ALL(*) algorithm for unbounded lookahead and is re-using language server infrastructure classes from VS Code. While Langium has more capabilities such as command line interface generation or visualization, we will focus on the aspects that relate to the model hub.\nIn Langium, a grammar is defined in a dedicated .langium file for which a VS Code Extension also provides tooling support. 
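For orientation, a Langium project is typically driven by a `langium-config.json` that tells the Langium CLI which grammar file to generate the language infrastructure from. The file below is a hedged sketch for a Coffee-like language; the project name, paths, and file extension are illustrative assumptions, not taken from the actual project:

```json
{
  "projectName": "CoffeeLanguage",
  "languages": [{
    "id": "coffee",
    "grammar": "src/language/coffee.langium",
    "fileExtensions": [".coffee"]
  }],
  "out": "src/language/generated"
}
```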
Let\u0026rsquo;s assume we want to have a simple grammar that follows a JSON syntax:\ngrammar CoffeeLanguage // grammar name entry CoffeeModelRoot: // entry rule for the parser, i.e., document root Machine | WorkflowConfig; // sequence of valid tokens → abstract syntax fragment IdentifiableFragment: // re-usable fragment \u0026#39;\u0026#34;id\u0026#34;\u0026#39; \u0026#39;:\u0026#39; id=STRING; TYPE_MACHINE returns string: \u0026#39;\u0026#34;Machine\u0026#34;\u0026#39;; // we use a type to ease distinction for parsing Machine: \u0026#39;{\u0026#39; IdentifiableFragment \u0026#39;,\u0026#39; \u0026#39;\u0026#34;name\u0026#34;\u0026#39; \u0026#39;:\u0026#39; name=STRING // keywords as inline terminals → concrete syntax \u0026#39;,\u0026#39; \u0026#39;\u0026#34;type\u0026#34;\u0026#39; \u0026#39;:\u0026#39; type=TYPE_MACHINE (\u0026#39;,\u0026#39; \u0026#39;\u0026#34;workflows\u0026#34;\u0026#39; \u0026#39;:\u0026#39; \u0026#39;[\u0026#39; ((workflows+=Workflow) (\u0026#39;,\u0026#39; workflows+=Workflow)*)? \u0026#39;]\u0026#39;)? 
// optional workflow children \u0026#39;}\u0026#39;; TYPE_WORKFLOW returns string: \u0026#39;\u0026#34;Workflow\u0026#34;\u0026#39;; Workflow: \u0026#39;{\u0026#39; IdentifiableFragment \u0026#39;,\u0026#39; \u0026#39;\u0026#34;name\u0026#34;\u0026#39; \u0026#39;:\u0026#39; name=STRING \u0026#39;,\u0026#39; \u0026#39;\u0026#34;type\u0026#34;\u0026#39; \u0026#39;:\u0026#39; type=TYPE_WORKFLOW \u0026#39;}\u0026#39;; TYPE_WORKFLOW_CONFIG returns string: \u0026#39;\u0026#34;WorkflowConfig\u0026#34;\u0026#39;; WorkflowConfig: \u0026#39;{\u0026#39; \u0026#39;\u0026#34;machine\u0026#34;\u0026#39; \u0026#39;:\u0026#39; machine=[Machine:STRING] // reference a machine defined somewhere else \u0026#39;,\u0026#39; \u0026#39;\u0026#34;workflow\u0026#34;\u0026#39; \u0026#39;:\u0026#39; workflow=[Workflow:STRING] \u0026#39;,\u0026#39; \u0026#39;\u0026#34;type\u0026#34;\u0026#39; \u0026#39;:\u0026#39; type=TYPE_WORKFLOW_CONFIG \u0026#39;}\u0026#39;; hidden terminal WS: /\\s+/; // Ignore whitespaces during parsing terminal STRING: /\u0026#34;[^\u0026#34;]*\u0026#34;/; // JSON only supports double quoted strings If you try out this grammar on the Langium Playground you\u0026rsquo;ll see what content can and cannot be parsed.\nSpecifically, you would expect to see content like this:\n{ \u0026#34;id\u0026#34;: \u0026#34;my_machine\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;Wonderful Machine\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Machine\u0026#34;, \u0026#34;workflows\u0026#34;: [ { \u0026#34;id\u0026#34;: \u0026#34;my_workflow\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;Best Workflow\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Workflow\u0026#34; } ] } and\n{ \u0026#34;machine\u0026#34;: \u0026#34;Wonderful Machine\u0026#34;, \u0026#34;workflow\u0026#34;: \u0026#34;Best Workflow\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;WorkflowConfig\u0026#34; } When Langium encounters such content, it will first parse the document, export symbols into the global index, compute 
the local scope for symbols, link cross references according to the scope, index the resolved cross references and then validate the document.\nHowever, when you input the examples above with our grammar, you will notice that there are a few things that do not work as expected out of the box:\n References are done based on the name property instead of id. The workflow cannot properly be referenced as it is only a child of the machine. If we write multiple grammars, we may need to repeat our terminal rules, i.e., WS or STRING. Luckily, one of the core principles in Langium is customization. The core of that customization principle is a dependency injection framework with a set of default modules and their service implementations that can be overwritten in one central place. Specifically, you define modules where implementations are bound to a specific property and then create a set of services out of them using the injection mechanism. Each module has a chance to provide new services, i.e., by specifying new properties, or to override existing services by re-using an existing property name in a module that comes later in the chain of modules.\nIn order to solve our first problem, we therefore would need to re-bind the default NameProvider from Langium and ensure that it uses our id property instead of the name attribute:\nexport interface ModelServicesExtension { references: { /** override */ NameProvider: NameProvider; } } export type ModelLanguageServices = LangiumServices \u0026amp; ModelServicesExtension; export function createMyLangModule(context: { shared: ModelLanguagesSharedServices; }): Module\u0026lt;ModelLanguageServices, ModelServicesExtension\u0026gt; { return { references: { NameProvider: () =\u0026gt; new QualifiedIdProvider() } }; } // creating the services from the modules const shared = inject(createDefaultSharedModule(context), …); const myLanguage = inject(createDefaultModule({ shared }), createMyLangModule({ shared }), …); // usage: our NameProvider will be 
lazily created on access myLanguage.references.NameProvider.getName() In Langium, modules and the generated services can be split into two categories:\n Shared modules and services that mostly relate to the infrastructure such as the language server or the document management and build system. Language-specific modules and services that only relate to a single language such as parsing, auto-completion, or validation. For more details on the dependency injection system and the individual default implementations, we refer to the Langium documentation.\nAs we have seen in this section, defining a grammar in Langium is very straightforward. However, there are certain services and capabilities that may be very common and can be shared through custom modules without having to re-implement them every time. Using Model Hub, we therefore offer some modules to support you in the implementation of JSON-based languages and also provide shared services and classes to ease the integration into the overall Model Hub architecture. There is some future work on our road map to support the generation of a JSON-based grammar and overall integration based on a set of Typescript interfaces that represent the semantic model to make the definition of your modeling language even more efficient. Of course, you are not limited to JSON-based grammars and there are also plans to support a YAML-like syntax. However, if you are very keen on the textual representation of your grammar, you will always be able to simply define your own Langium grammar.\nTo see how we can integrate your modeling language into the Model Hub, see the next section.\n","permalink":"https://www.eclipse.dev/emfcloud/documentation/modelinglanguage/","tags":null,"title":"Language Definition"},{"categories":null,"contents":"To extend the Model Hub with your specific language, you need to provide a ModelServiceContribution. 
A model service contribution can provide a dedicated Model Service API and extend the persistence and validation capabilities of the Model Hub.\nOne of the core principles in our Model Hub architecture is re-use and we therefore aim to re-use as much as possible of the language infrastructure and support that Langium generates for us. This is reflected in several design decisions:\n Since the language server that contains the modules with all the languages already starts in a dedicated process, we will start our own Model Hub server in the same process to ease access.\n We are re-using the dependency injection framework from Langium to bind our own Model Hub-specific services, such as the core model hub implementation, the model manager that holds the model storage, the overall command stack and the model subscriptions, or the validation service that ensures that all custom validations are run on each model.\n The bridge that connects the Model Hub world with the Langium world is our generic EMF Cloud Model Hub Langium integration library. That library has two main components:\n The Abstract Syntax Tree (AST) server that serves as a facade to access and update semantic models from the Langium language server as a non-LSP client. It provides a simple open-request-update-save/close lifecycle for documents and their semantic model.\n A converter between the Langium-based AST model and the client language model.\n The biggest difference between those two models is that the client language model needs to be a serializable JSON model, as we update the model using JSON patches and intend to send the model to clients which might run in a different process. Please note that this JSON model is independent from the serialization format that you define in your grammar from which the AST is derived.
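To illustrate the serializability requirement, the following minimal sketch (with hypothetical node shapes, not the real converter API) shows why an AST with container back-references cannot be passed to JSON.stringify directly, and how cross references can be flattened into plain string ids:

```typescript
// Hypothetical AST and client-model shapes, invented for illustration.
interface AstNode {
    id: string;
    children: AstNode[];
    $container?: AstNode; // back-reference: makes the structure cyclic
}

interface LanguageNode {
    id: string;
    children: LanguageNode[];
    containerRef?: string; // cross reference represented as a plain id
}

function toLanguageModel(node: AstNode): LanguageNode {
    return {
        id: node.id,
        containerRef: node.$container?.id, // breaks the cycle
        children: node.children.map(toLanguageModel)
    };
}

// Build a small cyclic AST: machine <-> workflow.
const machine: AstNode = { id: 'machine', children: [] };
const workflow: AstNode = { id: 'workflow', children: [], $container: machine };
machine.children.push(workflow);

// JSON.stringify(machine) would throw a circular-structure error;
// the converted language model serializes fine.
const serialized = JSON.stringify(toLanguageModel(machine));
```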
The core work of the conversion is the resolution of cycles and the proper representation of cross references, so that when we get a language model back from the client we can restore a full Langium-based AST model again, i.e., to have a full bi-directional transformation when it comes to the semantic model.\nUsing the generic AST server and the AST-language model converter, we can easily implement a language-specific, typed ModelServiceContribution that the Model Hub can pick up and use, as all we need to do is connect our Langium services with the respective Model Hub services. Any additional functionality that we want to expose for our language can be exported as a Model Service API in the contribution and re-used in the model hub or even a dedicated server.\nModel Persistence A model persistence contribution provides language-specific methods to load and store models from and to a persistent storage. Based on the services generated by Langium, we can query the document storage from Langium and, using the generic converter, ensure that we return a serializable model for the Model Hub. Similarly, we can re-use the generated Langium infrastructure to store the model by converting the language model back to an AST model. Furthermore, we need to ensure that any time a model is updated on the Langium side we properly update the model on the Model Hub side.
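This update direction can be pictured with plain objects: compute a diff between the current hub model and the new language model, and only execute a patch when the diff is non-empty. Below is a hedged sketch using a toy flat-object diff of our own; the actual implementation relies on a JSON Patch library such as fast-json-patch and a PatchCommand on the command stack:

```typescript
// Toy JSON-Patch-like operations for flat objects only (an illustration,
// not the fast-json-patch API used by the real persistence contribution).
interface Op {
    op: 'add' | 'replace';
    path: string;
    value: unknown;
}

function compareFlat(current: Record<string, unknown>, next: Record<string, unknown>): Op[] {
    const ops: Op[] = [];
    for (const key of Object.keys(next)) {
        if (!(key in current)) {
            ops.push({ op: 'add', path: '/' + key, value: next[key] });
        } else if (current[key] !== next[key]) {
            ops.push({ op: 'replace', path: '/' + key, value: next[key] });
        }
    }
    return ops;
}

function applyFlat(model: Record<string, unknown>, ops: Op[]): Record<string, unknown> {
    const result: Record<string, unknown> = { ...model };
    for (const op of ops) {
        result[op.path.slice(1)] = op.value; // strip the leading '/'
    }
    return result;
}

const hubModel = { name: 'SuperBrewer3000', temperature: 90 };
const langiumModel = { name: 'SuperBrewer3000', temperature: 95 };

const diff = compareFlat(hubModel, langiumModel);
// An empty diff means both sides are already in sync and no command is needed.
const updated = diff.length === 0 ? hubModel : applyFlat(hubModel, diff);
```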
We achieve that by installing a listener on the Langium side and using the Model Manager from the Model Hub to execute a PATCH command that updates the model in the Model Hub.\nFor the Coffee Model, the persistence contribution may look something like this:\nclass CoffeePersistence implements ModelPersistenceContribution\u0026lt;string, CoffeeModelRoot\u0026gt; { modelHub: ModelHub; modelManager: ModelManager\u0026lt;string\u0026gt;; constructor(private modelServer: CoffeeModelServer) { } async canHandle(modelId: string): Promise\u0026lt;boolean\u0026gt; { return modelId.endsWith(\u0026#39;.coffee\u0026#39;); } async loadModel(modelId: string): Promise\u0026lt;CoffeeModelRoot\u0026gt; { const model = await this.modelServer.getModel(modelId); if (model === undefined) { throw new Error(\u0026#39;Failed to load model: \u0026#39; + modelId); } this.modelServer.onUpdate(modelId, async newModel =\u0026gt; { try { // update model hub model const currentModel = await this.modelHub.getModel(modelId); const diff = compare(currentModel, newModel); if (diff.length === 0) { return; } const commandStack = this.modelManager.getCommandStack(modelId); const updateCommand = new PatchCommand(\u0026#39;Update Derived Values\u0026#39;, currentModel, diff); commandStack.execute(updateCommand); } catch (error) { console.error(\u0026#39;Failed to synchronize model from CoffeeLanguageService\u0026#39;, error); } }); return model; } async saveModel(modelId: string, model: CoffeeModelRoot): Promise\u0026lt;boolean\u0026gt; { try { await this.modelServer.save(modelId, model); } catch (error) { console.error(\u0026#39;Failed to save model \u0026#39; + modelId, error); return false; } return true; } } Model Validation A model validation contribution can provide a set of validators that work on the semantic model of the Model Hub. As a result, a validator can return a hierarchical diagnostic object that captures the infos, warnings, and errors of a particular part in the model.
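A hierarchical diagnostic can be modeled roughly as follows; the field names here are hypothetical and the actual Model Hub Diagnostic type may differ:

```typescript
// Hypothetical hierarchical diagnostic shape, for illustration only.
interface Diagnostic {
    severity: 'info' | 'warn' | 'error';
    message: string;
    children: Diagnostic[];
}

interface WorkflowLike {
    name: string;
    nodes: { name: string }[];
}

function validateWorkflow(workflow: WorkflowLike): Diagnostic {
    // One child diagnostic per unnamed node.
    const children: Diagnostic[] = workflow.nodes
        .filter(node => node.name.trim() === '')
        .map(() => ({ severity: 'error' as const, message: 'Node must have a name', children: [] }));
    return {
        // The parent aggregates the worst severity of its children.
        severity: children.length > 0 ? 'error' : 'info',
        message: `Validation of workflow '${workflow.name}'`,
        children
    };
}

const diagnostic = validateWorkflow({
    name: 'BrewingFlow',
    nodes: [{ name: 'Check Water' }, { name: '' }]
});
```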
Using the generic transformations between the Langium and Model Hub space, the main work in this contribution is the translation from Langium\u0026rsquo;s DiagnosticInfo to the Model Hub\u0026rsquo;s more generic Diagnostic. Providing this translation as part of the generic EMF Cloud Model Hub Langium integration library is on the roadmap but can also be extracted from any public example.\n","permalink":"https://www.eclipse.dev/emfcloud/documentation/langium/","tags":null,"title":"Langium Integration"},{"categories":null,"contents":"General principles Editing of models is handled by the ModelManager, by executing Commands on one or several CommandStacks. The ModelManager is not directly exposed by the ModelHub API, so it is not (easily) possible to execute arbitrary commands on arbitrary models. Instead, the ModelManager is passed to ModelServiceContributions, which can then use it in their custom ModelService implementation to execute commands. This way, clients do not have to deal with Commands directly, but simply interact with the custom API.\nModel Service implementation As seen in the ModelHub section, the ModelServiceContribution is the entry point for model-specific contributions. Upon initialization of the ModelHub, each contribution will get access to the ModelManager, and should typically forward it to their ModelService implementation.\n@injectable() export class CoffeeModelServiceContribution extends AbstractModelServiceContribution { private modelService: CoffeeModelService; @postConstruct() protected init(): void { this.initialize({ id: COFFEE_SERVICE_KEY }) } getModelService\u0026lt;S\u0026gt;(): S { return this.modelService as unknown as S; } setModelManager(modelManager: ModelManager): void { super.setModelManager(modelManager); // Forward the model manager to our model service, so it can actually // execute some commands.
this.modelService = new CoffeeModelServiceImpl(modelManager); } } Then, the ModelService implementation can access the model and execute Commands on the ModelManager:\nexport class CoffeeModelServiceImpl implements CoffeeModelService { constructor(private modelManager: ModelManager\u0026lt;string\u0026gt;) {} // clients could directly access the model from the Model Hub, but // custom ModelService APIs can also provide a convenience method: async getCoffeeModel(modelUri: string): Promise\u0026lt;CoffeeModelRoot | undefined\u0026gt; { const key = getModelKey(modelUri); return this.modelManager.getModel\u0026lt;CoffeeModelRoot\u0026gt;(key); } // [...] async createNode(modelUri: string, parent: string | Workflow, args: CreateNodeArgs): Promise\u0026lt;PatchResult\u0026gt; { const model = await this.getCoffeeModel(modelUri); if (model === undefined) { return { success: false, error: `Failed to edit ${modelUri.toString()}: Model not found` }; } // parent can be either the element itself, or its path. Resolve the // correct element. const parentPath = this.getParentPath(model, parent); const parentElement = getValueByPointer(model, parentPath); if (!isWorkflow(parentElement)) { throw new Error(`Parent element is not a Workflow: ${parentPath}`); } // create the new Node (regular JSON object) const newNode = createNode(args.type, parentElement, args); // create a JSON patch to edit the model const patch: Operation[] = [ { op: \u0026#39;add\u0026#39;, path: `${parentPath}/nodes/-`, value: newNode } ]; // get the command stack. In this case, we don\u0026#39;t have multi-model command stacks, // so just use the modelUri as the command stack id. 
const stackId = getStackId(modelUri); const stack = this.modelManager.getCommandStack(stackId); // create the Command from the JSON Patch and execute it const command = new PatchCommand(\u0026#39;Create Node\u0026#39;, modelUri, patch); const result = await stack.execute(command); // Return a patch result that indicates success or failure, and applied changes // in case of success. const patchResult = result?.get(command); if (patchResult === undefined) { return { success: false, error: `Failed to edit ${modelUri.toString()}: Model edition failed` }; } return { success: true, patch: patchResult }; } } There are several ways to create patch commands. In the above example, we created the JSON Patch manually, using the JSON pointer path from the parent, and adding a value. However, using a JSON Patch Library (such as fast-json-patch, although any similar library can be used), one could generate the patch instead:\nexport class CoffeeModelServiceImpl implements CoffeeModelService { // [...] async createNode(modelUri: string, parentPath: string): Promise\u0026lt;PatchResult\u0026gt; { const model = await this.getCoffeeModel(modelUri); // [...] // We are not allowed to edit the `model` object directly. Make a copy, // and then we\u0026#39;ll use fast-json-patch to generate a diff-patch. const updatedModel = deepClone(model) as CoffeeModelRoot; const updatedWorkflow = getValueByPointer(updatedModel, parentPath); if (!isWorkflow(parentElement)) { throw new Error(`Parent element is not a Workflow: ${parentPath}`); } // Directly modify the updatedModel object, by adding the new node to it const newNode = createNode(args.type, parentElement, args); updatedWorkflow.nodes.push(newNode); // Generate a patch using fast-json-patch.compare() and create a command const patch: Operation[] = compare(model, updatedModel); const command = new PatchCommand(\u0026#39;Create Node\u0026#39;, modelUri, patch); // [...] 
// then get the command stack and execute the command as before const result = await stack.execute(command); // [...] } } The ModelManager framework also provides a convenience method to create a PatchCommand that will directly edit the JSON Model, using a Model Updater:\nexport class CoffeeModelServiceImpl implements CoffeeModelService { // [...] async createNode(modelUri: string, parentPath: string): Promise\u0026lt;PatchResult\u0026gt; { const model = await this.getCoffeeModel(modelUri); // create a PatchCommand using a Model Updater. This allows us to directly // edit the model, without having to deal with JSON Patches at all. const command = new PatchCommand(\u0026#39;Create Node\u0026#39;, modelUri, model =\u0026gt; { const workflow = getValueByPointer(model, parentPath); if (!isWorkflow(parentElement)) { throw new Error(`Parent element is not a Workflow: ${parentPath}`); } // Directly modify the updatedModel object, by adding the new node to it. // Inside of the Model Updater, we are allowed to edit the model object // directly - no need to create our own working copy! const newNode = createNode(args.type, parentElement, args); workflow.nodes.push(newNode); }); // [...] // then get the command stack and execute the command as usual const result = await stack.execute(command); // [...] } } Command Stack IDs A Command can be executed on any CommandStack. The Command Stack ID defines how Undo/Redo will behave, especially when a Command affects multiple models. When undoing (or redoing) changes on a Command Stack, the latest command executed on this Stack will be undone, ignoring commands that were executed in different stacks.\nThe most typical use case is to have one command stack per editor, so editors are independent from each other: undoing changes in one editor (command stack) does not affect the state of other editors.\nHowever, when models have cross-references, you may be able to open inter-related models in different editors. 
In this case, it can be necessary to use a shared command stack for all editors, to ensure all editors work on a consistent state of the model. In that case, you may want to use a Command Stack Identifier that represents the entire set of inter-related models, such as the parent folder path, or project name.\nModel Hub Context A Context is a string identifier that defines the scope of a Model Hub. One model hub instance exists per context. Each Model Hub has its own set of Model Service Contributions, Model Manager, and set of Command Stacks, which are completely independent from other Model Hub instances. A Command executed in a given context cannot be undone from another context.\nIt is up to each application to decide how these contexts are defined. For example, you may decide that you need to isolate changes on a per-project basis, in which case it can be useful to use the Project ID as the Model Hub context. Alternatively, if you\u0026rsquo;re defining several modeling languages without any relationship to each other, you may choose to use the language ID as the context.\nFor simpler applications, using a single, constant context ID is usually recommended.\n","permalink":"https://www.eclipse.dev/emfcloud/documentation/editingdomain/","tags":null,"title":"Editing Domain"},{"categories":null,"contents":" The foundational components for setting up tree-based editors in Theia are the Theia Tree Editor and JSON Forms. This guide details how to effectively integrate these components along with the ModelHub to create a functional tree editor connected to the ModelHub. The Theia Tree Editor utilizes the Frontend Model Hub for seamless integration and data management.\nOverview The Theia Tree Editor is a robust framework offering base classes and service definitions. These can be extended and implemented to tailor the editor for specific data requirements. 
For comprehensive information, refer to the official Theia Tree Editor documentation.\nIntegrating ModelHub Customizing the Theia Tree Editor involves providing data to the tree and setting up a listener for data updates within the editor. ModelHub primarily manages model provisioning. Utilize the FrontendModelHub from the ModelHub package to fetch models. To synchronize changes made in the editor back to ModelHub, leverage a ModelService. This service facilitates data updates to the model stored in ModelHub. Implement these functionalities by injecting FrontendModelHub into the constructor. Use it within the init method to monitor model changes and refresh the tree accordingly. Override the handleFormUpdate method to capture and apply data changes to the ModelHub-tracked model.\nCoffee Editor NG Example CoffeeTreeEditorWidget, an extension of NavigatableTreeEditorWidget (similar to ResourceTreeEditorWidget), integrates ModelHub by adding a listener within the init method:\nthis.modelHub .subscribe(this.getResourceUri()?.toString() ?? \u0026#39;\u0026#39;) .then((subscription) =\u0026gt; { subscription.onModelChanged = (_modelId, model, _delta) =\u0026gt; { this.updateTree(model); }; }); This listener updates the tree editor in real-time with any changes in the data model.\nAdditionally, CoffeeTreeEditorWidget implements the handleFormUpdate method to trigger updates in the model using the ModelService:\nprotected override async handleFormUpdate(data: any, node: TreeEditor.Node): Promise\u0026lt;void\u0026gt; { const result = await this.modelService.edit(this.getResourceUri()?.toString() ?? 
\u0026#39;\u0026#39;, data); this.updateTree(result.data); } This method ensures that any changes made in the editor are promptly reflected in the model managed by the ModelHub.\nThe ModelService is injected into the CoffeeTreeEditorWidget.\n","permalink":"https://www.eclipse.dev/emfcloud/documentation/treeeditor/","tags":null,"title":"Tree-based Editors"},{"categories":null,"contents":" Introduction The foundation of implementing diagram editors is the GLSP framework. This guide will give an introduction to connecting GLSP diagram editors with the ModelHub. Specifically, the GLSP diagram editor makes use of the ModelHub and the interfaces of the ModelService like the CoffeeModelServer or the CoffeeModelService for seamless integration and outsourced data management. Furthermore, specific examples of the Coffee Editor NG implementation will be given as a guideline.\nModelHub Integration SourceModelStorage A source model storage handles the persistence of source models, i.e., loading and saving them from/into the model state. A source model is an arbitrary model from which the graph model of the diagram is to be created. A source model loader obtains the information on which source model shall be loaded from a RequestModelAction; typically its client options. Once the source model is loaded, a model loader is expected to put the loaded source model into the model state for further processing, such as transforming the loaded model into a graph model (see GModelFactory below).
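This lifecycle can be sketched with a small, self-contained stand-in; the interface shapes below are hypothetical simplifications, not the exact GLSP server API:

```typescript
// Hypothetical facade for the Model Hub, invented for illustration.
interface ModelHubFacade {
    getModel(uri: string): Promise<object | undefined>;
    save(uri: string): Promise<void>;
}

class SketchSourceModelStorage {
    private modelState = new Map<string, object>();

    constructor(private modelHub: ModelHubFacade) {}

    async loadSourceModel(uri: string): Promise<void> {
        const sourceModel = await this.modelHub.getModel(uri);
        if (!sourceModel) {
            throw new Error('Expected a source model for ' + uri);
        }
        // Put the loaded source model into the model state for further processing.
        this.modelState.set(uri, sourceModel);
    }

    async saveSourceModel(uri: string): Promise<void> {
        // Persistence itself is outsourced to the Model Hub.
        await this.modelHub.save(uri);
    }

    getFromState(uri: string): object | undefined {
        return this.modelState.get(uri);
    }
}

// Toy hub used for demonstration.
const savedUris: string[] = [];
const hub: ModelHubFacade = {
    getModel: async uri => (uri.endsWith('.coffee') ? { workflows: [{ name: 'BrewingFlow' }] } : undefined),
    save: async uri => { savedUris.push(uri); }
};

const sketchStorage = new SketchSourceModelStorage(hub);
```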
On saveSourceModel(SaveModelAction), the source model storage persists the source model from the model state.\nIn our case, all of these responsibilities are outsourced to the ModelHub, namely the model loading, subscribing to model changes, dirty state information and triggering the save of the model.\nGModelFactory A graph model factory produces a graph model from the source model contained in the model state.\nIn this case, we also use the source model from the model state to translate the source model into a GModelRoot. For more complex translations, e.g. for creating edges, the CoffeeModelServer is used to resolve references.\nModel Operations To outsource the responsibility of executing operations directly on the model, operations should be forwarded to the CoffeeModelService. The CoffeeModelService offers tailored functions to manipulate the model, e.g. dedicated create or delete functions that take as arguments the specific diagram data like the position on the canvas.\nCoffee Editor NG Example WorkflowModelStorage The WorkflowModelStorage is responsible for loading and saving source models via the ModelHub.\nLoad the source model To load the source model for further usage and transformation into the graphical GModel, we fetch the CoffeeModelRoot from the modelHub as follows:\nthis.modelHub.getModel\u0026lt;CoffeeModelRoot\u0026gt;(modelUri); Detailed implementation async loadSourceModel(action: RequestModelAction): Promise\u0026lt;void\u0026gt; { const modelUri = this.getUri(action); const coffeeModel = await this.modelHub.getModel\u0026lt;CoffeeModelRoot\u0026gt;(modelUri); if (!coffeeModel || !isMachine(coffeeModel) || (isMachine(coffeeModel) \u0026amp;\u0026amp; coffeeModel.workflows.length \u0026lt; 1)) { throw new GLSPServerError(\u0026#39;Expected Coffee Model with at least one workflow\u0026#39;); } this.modelState.setSemanticRoot(modelUri, coffeeModel); this.subscribeToChanges(modelUri); } Subscribe to model changes To subscribe to model changes,
modelHub offers a subscribe function:\nthis.subscription = this.modelHub.subscribe(modelUri); this.subscription.onModelChanged = async (modelId: string, newModel: object) =\u0026gt; { // handle model update } Detailed implementation private subscribeToChanges(modelUri: string): void { this.subscription = this.modelHub.subscribe(modelUri); this.subscription.onModelChanged = async (modelId: string, newModel: object) =\u0026gt; { if (!newModel || !isMachine(newModel) || newModel.workflows.length \u0026lt; 1) { throw new GLSPServerError(\u0026#39;Expected Coffee Model with at least one workflow\u0026#39;); } this.modelState.setSemanticRoot(modelId, newModel); const actions = await this.submissionHandler.submitModel(); const dirtyStateAction = actions.find(action =\u0026gt; action.kind === SetDirtyStateAction.KIND); if (dirtyStateAction \u0026amp;\u0026amp; SetDirtyStateAction.is(dirtyStateAction)) { dirtyStateAction.isDirty = this.modelHub.isDirty(this.modelState.semanticUri); } this.actionDispatcher.dispatchAll(actions); }; } Save model To trigger model saving, modelHub offers a save function:\nthis.modelHub.save(modelUri) Detailed implementation async saveSourceModel(action: SaveModelAction): Promise\u0026lt;void\u0026gt; { const modelUri = action.fileUri ?? this.modelState.semanticUri; if (modelUri) { await this.modelHub.save(modelUri); } } CoffeeGModelFactory To resolve references, such as edge sources or targets, we use the CoffeeModelServer to get the resolved data needed to properly translate the source model into a GModelRoot.\nThis shows one flow from the example source model superbrewer3000.coffee:\n... { \u0026#34;id\u0026#34;: \u0026#34;checkWaterToDecision\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;Flow\u0026#34;, \u0026#34;source\u0026#34;: \u0026#34;SuperBrewer3000.BrewingFlow.Check Water\u0026#34;, \u0026#34;target\u0026#34;: \u0026#34;SuperBrewer3000.BrewingFlow.MyDecision\u0026#34; }, ...
The following snippet shows how to create a GEdge from such a flow object:\n@inject(CoffeeModelServer) protected modelServer: CoffeeModelServer; ... protected async createEdge(edge: Flow): Promise\u0026lt;GEdge\u0026gt; { const [sourceNode, targetNode] = await Promise.all([ this.modelServer.resolve\u0026lt;Node\u0026gt;(edge.source as Required\u0026lt;NodeReferenceInfo\u0026gt;), this.modelServer.resolve\u0026lt;Node\u0026gt;(edge.target as Required\u0026lt;NodeReferenceInfo\u0026gt;) ]); return GEdge.builder() .id(edge.id) .sourceId(sourceNode.id) .targetId(targetNode.id) .build(); } Model Operations via the CoffeeModelService To showcase how to forward model operations to the CoffeeModelService, let\u0026rsquo;s take a look at the CreateWorkflowNodeOperationHandler. This handler is an abstract node creation operation handler that is used to create all possible types of nodes supported by the Coffee diagram editor, e.g. manual tasks or fork nodes.\nWe wrap the model operation in a GModelRecordingCommand - in our case a customized CoffeeModelCommand that aligns the customized types of the GModelState for example. The model operation itself is basically the operation data collection we get from the diagram editor, which we hand over to the CoffeeModelService. 
The modelService provides a dedicated function to create a new node, expecting the diagram editor\u0026rsquo;s specific data like position and type of node.\noverride createCommand(operation: CreateNodeOperation): MaybePromise\u0026lt;Command | undefined\u0026gt; { return new CoffeeModelCommand( this.modelState, this.serializer, () =\u0026gt; this.createWorkflowNode(operation) ); } async createWorkflowNode(operation: CreateNodeOperation): Promise\u0026lt;void\u0026gt; { const container = this.modelState.semanticRoot; const modelService = this.modelHub.getModelService\u0026lt;CoffeeModelService\u0026gt;(COFFEE_SERVICE_KEY); const modelId = this.modelState.semanticUri; await modelService?.createNode(modelId, container.workflows[0], { posX: this.getLocation(operation)?.x ?? Point.ORIGIN.x, posY: this.getLocation(operation)?.y ?? Point.ORIGIN.y, type: this.getNodeType(operation) }); } Similarly, model operations can be implemented for other diagram editor use cases, like deleting model elements, resizing or repositioning model elements, or editing model element labels.
This not only opens up accessing EMF models from web-based frontends and other components that aren\u0026rsquo;t written in Java, but also encapsulates your EMF dependency for future migrations.\nAlongside the EMF Model Server, there are also several components that simplify interacting with an EMF Model Server:\n Java-based EMF Model Server Client Typescript-based EMF Model Server Client Eclipse Theia integration of the EMF Model Server Eclipse GLSP Integration EMF Coffee Editor The EMF Coffee Editor provides a comprehensive example modeling tool that combines the EMF Model Server as well as all components mentioned above. The sources of the Coffee Editor are available under an open-source license and thus make it a great blueprint and starting point for your modeling tool project.\n This example provides several features:\n A custom Theia application frame Tree/form-based property editor Diagram editor Textual DSL Model analysis and visualization Code generation Go ahead and try out the coffee editor online!\nGetting Started To get you started quickly, we also provide project templates for the most popular choices including EMF Cloud and GLSP components.\nPlease see the following project-template and follow its README file.\n💾 Model Server ● 🖥️ Java ● 🗂️ EMF ● 🖼️ Theia \u0026ndash; modelserver-glspjava-emf-theia\nIf you need help, please raise a question in the GitHub discussions or look at our support options.\n","permalink":"https://www.eclipse.dev/emfcloud/documentation/emf/","tags":null,"title":"EMF Support"},{"categories":null,"contents":"Articles The EMF Cloud Model Server How to build a tree editor in Eclipse Theia A web-based modeling tool based on Eclipse Theia, EMF Cloud and GLSP Introducing EMF Cloud Web-based diagram editor features in Eclipse GLSP GLSP: Diagrams in VS Code, Theia, Eclipse and plain HTML Videos EclipseCon 2023: Building cloud-native (modeling) tools EclipseCon 2021: Model validation, diffing and more with EMF Cloud Eclipse Cloud Tool
Time June 2021: Web-based tools - built with Eclipse (only) Eclipse Cloud Tool Time March 2021: Web-based modeling tools with EMF Cloud EclipseCon 2020: Ecore tools in the cloud - behind the scenes EclipseCon 2020: Diagram editors in the web with Eclipse GLSP EclipseCon 2019: Lifting the greatness of EMF into the cloud with EMF Cloud ","permalink":"https://www.eclipse.dev/emfcloud/documentation/additionals/","tags":null,"title":"More Articles \u0026 Videos"},{"categories":null,"contents":null,"permalink":"https://www.eclipse.dev/emfcloud/categories/","tags":null,"title":"Categories"},{"categories":null,"contents":null,"permalink":"https://www.eclipse.dev/emfcloud/contact/","tags":null,"title":"Contact"},{"categories":null,"contents":null,"permalink":"https://www.eclipse.dev/emfcloud/documentation/","tags":null,"title":"Documentation"},{"categories":null,"contents":null,"permalink":"https://www.eclipse.dev/emfcloud/","tags":null,"title":"EMF Cloud"},{"categories":null,"contents":"Support for EMF Cloud, to create new EMF Cloud components and for projects adopting EMF Cloud is provided by ","permalink":"https://www.eclipse.dev/emfcloud/support/","tags":null,"title":"Support"},{"categories":null,"contents":null,"permalink":"https://www.eclipse.dev/emfcloud/tags/","tags":null,"title":"Tags"}]