Exploring OWL Ontologies Visually: A Paradigm Shift in Understanding
Author: Nicolas Figay
Status: Draft
Last update: 2025-02-08
This article was initiated due to the success of the following post.
As a post is not enough to address the topic, this article develops the subject in more depth.
Introduction
When diving into the world of the Web Ontology Language (OWL), it's easy to get caught up in thinking about data structures and classes the same way we think about tables or objects in traditional databases. But OWL ontologies operate on different principles, and visualizing them can help you grasp their true nature in a more intuitive way.
Properties Are Not Just Data Attributes
One of the biggest misconceptions is that properties in OWL are akin to attributes in traditional data structures. But here's the catch: Properties in OWL can be defined independently and aren't always tied to specific classes or entities. This flexibility allows properties to be more versatile, able to apply across different contexts or even to complex class expressions. This is fundamentally different from attributes, which are always defined within the context of a specific class or entity.
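To make this concrete, here is a minimal Turtle sketch (with hypothetical IRIs): the property exists as a standalone axiom and can link individuals without any class being declared.

@prefix :    <https://example.org/onto#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# A property declared on its own: no domain, no range, no owning class.
:relatesTo a owl:ObjectProperty .

# It can immediately link any two individuals, whatever their classes (if any).
:thing1 :relatesTo :thing2 .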
Understanding the Ontology Structure
Visualizing OWL ontologies helps reveal their structure — things, properties, and classes. While data structures often prioritize how entities are grouped together, OWL ontologies start with the entities themselves (individuals), then define the relationships (properties), and only then categorize these relationships into classes. This approach reflects a more dynamic, flexible understanding of the world, rather than one rigidly fixed into predefined structures.
Disrupting the "Class First" Mindset
In traditional data structures, we often think in terms of attributes attached to objects or entities. In OWL ontologies, properties serve as standalone concepts that can connect things in flexible ways, whether or not a class or entity exists. This is one of the key principles that differentiates OWL from traditional data models. It's about understanding relationships between concepts, not just about storing data in hierarchical structures.
Why Visual Exploration Matters
Exploring OWL ontologies visually helps dispel misunderstandings and encourages a deeper understanding of the ontology’s principles. By representing properties as independent elements, we can see the true flexibility of how they connect individuals and classes in ways that traditional data structures can't match.
About the Web Ontology Language and the special role of object properties
The difference between OWL ontologies and traditional data modeling concerning domain and range revolves around their philosophical approach, intended use, and the conceptual role of relationships:
1. Domain and Range in OWL Ontologies
Declarative semantics: In OWL, domain and range are used to make logical assertions about the properties of a relation; for example, asserting a domain does not reject untyped data, it lets a reasoner infer the type of any subject of the property (see the Turtle sketch after this list).
Relations as first-class citizens: In OWL, relations (or properties) are first-class citizens: they can be declared and described independently of the classes they connect.
Focus on semantics: The primary goal is to define and infer meaning. The domain and range act as constraints on the ontology but are fundamentally logical axioms, not strict validation rules.
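A minimal Turtle illustration of this declarative reading (IRIs and class names are hypothetical):

@prefix :     <https://example.org/onto#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:hasAuthor rdfs:domain :Document ;
           rdfs:range  :Person .

# Even if :report42 has no declared type, this triple is not "rejected";
# instead a reasoner infers that :report42 is a :Document and :alice a :Person.
:report42 :hasAuthor :alice .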
2. Domain and Range in Data Modeling
Operational constraints: In traditional data modeling (e.g., databases, ER models), domain and range are used as type constraints; for example, a foreign-key column rejects any value that does not reference an existing row.
Relations are not first-class citizens: In traditional data modeling, relations are not independent entities; they exist only as links between primary entities.
Focus on structure and validation: The primary goal is to enforce data integrity and structure, not to infer meaning. Domain and range act as strict constraints for the data.
3. Do Relations Become First-Class Citizens?
OWL ontologies: Yes, relations are first-class citizens, defined and manipulated independently of the classes they connect.
Data modeling: No, relations are secondary to entities and exist only to link them.
4. Key Philosophical Differences
The treatment of domain and range highlights a fundamental difference between knowledge representation (ontologies) and data modeling:
* OWL focuses on semantics and reasoning, where relationships are first-class entities.
* Data modeling emphasizes validation and operational structure, treating relationships as auxiliary constructs that link primary entities.
Visualizing the properties of an ontology
Legacy ways of visualizing properties
A first idea is to rely on the symbols used in Protégé for Classes, Object Properties and Named Individuals: respectively orange circle, blue rectangle and purple diamond.
Now, considering graphs as displayed with OntoGraf in Protégé, object properties are usually displayed as edges between classes, which makes their representation dependent on the existence of classes belonging to their domain and range.
This is also a problem when an object property has many classes in its domain or range, as illustrated by the next figure: for the object property PropertyWithManyClassesInDomainAndRange, there are as many arcs as there are domain and range statements.
In fact, the property itself is not visually represented; each displayed arc represents one domain or range statement.
Proposals to actually visualize domain and range of properties
The idea is to create, for each object property, a node for the property itself (propertyNode), a node for its domain, and a node for its range. The domain and range nodes can be external to the propertyNode (left graph) or internal, i.e. contained (right graph). This way, we can "assign" one or more classes to the domain and range of each object property.
Which one do you prefer?
In a viewer, we could propose a feature allowing users to switch from one representation to the other.
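To make the "internal" (contained) variant concrete, here is a minimal Cytoscape.js sketch; the element identifiers and labels are assumptions for illustration, not the viewer's actual code.

// Minimal Cytoscape.js sketch of the "internal" variant: the domain and range
// nodes are children of the property's compound node (all IDs are hypothetical).
const cytoscape = require('cytoscape');

const cy = cytoscape({
  elements: [
    { data: { id: 'prop', label: 'hasPart' } },                  // propertyNode (compound)
    { data: { id: 'domain', label: 'domain', parent: 'prop' } }, // contained domain node
    { data: { id: 'range', label: 'range', parent: 'prop' } },   // contained range node
    { data: { id: 'Assembly', label: 'Assembly' } },             // a class in the domain
    { data: { id: 'Part', label: 'Part' } },                     // a class in the range
    // "assigning" classes to the domain and range of the property
    { data: { id: 'd1', source: 'domain', target: 'Assembly' } },
    { data: { id: 'r1', source: 'range', target: 'Part' } }
  ]
});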
What other features can we imagine?
Proposal to visualize chain properties
Chain properties in OWL ontologies are a mechanism that allows the definition of a new property based on a sequence of other properties.
For instance, if a property R1 links A to B, and another property R2 links B to C, a chain property R3 can be defined to link A directly to C via the composition of R1 and R2.
This is achieved using OWL's property chain axioms, enabling reasoning engines to infer indirect relationships and enhance semantic expressiveness.
Chain properties extend the capabilities of standard properties by supporting transitive-like behaviors over custom-defined sequences, but they require reasoning support to compute and visualize their inferred relationships.
While modern OWL tools like Protégé or reasoning engines handle chain properties effectively, some legacy ontology viewers may struggle to visualize these constructs explicitly.
Such tools often focus on direct properties and might not display the inferred connections resulting from property chains, making it necessary to rely on reasoners for complete visualization.
Most of the time, chain properties are defined by attaching the chain to an object property, e.g. in Protégé.
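For reference, a property chain can be sketched in Turtle as follows (the grandparent example is illustrative, not taken from the ontology shown in the figures):

@prefix :    <https://example.org/onto#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# The chain hasParent o hasParent implies hasGrandparent.
:hasGrandparent owl:propertyChainAxiom ( :hasParent :hasParent ) .

# With :alice :hasParent :bob and :bob :hasParent :carol asserted,
# a reasoner infers :alice :hasGrandparent :carol .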
With OntoGraf, we can possibly visualize hasDocument, but not the definition of the chain itself.
How could we capture it?
The proposal here is to create a compound node PropertyChain (with a number, as several chains can be created for a single object property), and to create inside it one node per chain element, numbered with a sequence number (its order in the chain) and linked to the property it relates to. Note that an object property can contribute to many chains. The PropertyChain node is the parent of the chainMember nodes and can be collapsed or expanded. Note also that, with the various syntaxes allowing OWL to be serialized, the sequencing of the chain members is defined by the order of the property declarations in the serialized files.
An example of how this could look is given in the following figure.
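As a complement to the figure, a minimal Cytoscape.js element sketch of this structure could look as follows (all identifiers are hypothetical).

// Elements for one PropertyChain compound node, to be passed to
// cytoscape({ elements: ... }) or cy.add(): each chainMember child records its
// order in the chain and is linked to the object property it refers to.
const chainElements = [
  { data: { id: 'hasGrandparent', label: 'hasGrandparent' } },
  { data: { id: 'hasParent', label: 'hasParent' } },
  { data: { id: 'chain1', label: 'PropertyChain 1' } },              // collapsible compound node
  { data: { id: 'm1', label: '1', order: 1, parent: 'chain1' } },    // first chain member
  { data: { id: 'm2', label: '2', order: 2, parent: 'chain1' } },    // second chain member
  { data: { id: 'e1', source: 'm1', target: 'hasParent' } },         // member 1 -> property
  { data: { id: 'e2', source: 'm2', target: 'hasParent' } },         // member 2 -> property
  { data: { id: 'e3', source: 'chain1', target: 'hasGrandparent' } } // chain -> defined property
];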
About equivalence
Equivalence in OWL is all about defining when two things mean the same in your ontology.
Why it matters
When mapping or merging ontologies, equivalence is your friend. It helps unify concepts across datasets without redundancy. In well-designed ontologies, you won't find redundant concepts unless there’s a strong, justified need—they aim for clarity and precision.
Reasoners love equivalence too! They'll automatically infer facts, like Human and HomoSapiens being interchangeable, simplifying complex mappings. In a world of data silos, equivalence is a key step toward true interoperability.
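In Turtle, the Human/HomoSapiens example boils down to a single axiom (hypothetical IRIs):

@prefix :    <https://example.org/onto#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

:Human owl:equivalentClass :HomoSapiens .
# A reasoner then treats the two classes as interchangeable: every individual
# typed :Human is also inferred to be a :HomoSapiens, and vice versa.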
Here is a first proposal for representing it visually: the equivalence symbol ≡ on an edge linking two equivalent classes or properties.
However, this proposal doesn't take into account the fact that equivalence is not necessarily between two elements only. Let's come back to equivalence.
Equivalence and equivalence groups
Equivalence groups play a crucial role in semantic modeling and ontologies, particularly when we need to express that different classes or properties in an ontology are essentially the same, even if they are defined in different contexts or locations. The idea behind equivalence groups is that we treat a set of entities as interchangeable, despite them potentially having different identifiers (IRIs).
The challenge in visualizing equivalence groups arises from the need to represent these groups in a way that shows both the individual elements and their shared identity, while also maintaining clarity in a complex graph structure. For this reason, equivalence groups are represented as a set of nodes that belong to a specific equivalence group identifier, often a unique ID such as a UUID. This ID connects all members of the equivalence group, enabling us to track their relationship in a coherent and organized manner.
The rationale behind managing equivalence groups in this way is to ensure that we do not mix entities from different groups and that we maintain clear boundaries between them. While the entities in an equivalence group can be viewed as distinct, they share a common identity within the context of their equivalence. This distinction is important in any reasoning process, as it helps avoid confusion when elements of the same equivalence group are involved in different parts of an ontology.
In a visualization context, each equivalence group can be represented as a set of nodes linked to one another. These nodes carry an attribute – the equivalence group ID – which helps establish the group’s membership. When visualizing, these nodes might appear connected in different ways. They could be directly connected or clustered together to show that they belong to the same equivalence group. This allows users to intuitively recognize which nodes are part of the same semantic class or property group.
For a more refined visualization, one might use nested nodes to show hierarchical relationships or clusters within a given equivalence group. In some cases, nodes could be visually grouped or color-coded based on their equivalence group, offering a quick visual cue to users, helping them understand the logical grouping and connections without needing to delve into the underlying data structure.
The proposed approach for handling equivalence groups with graph visualisation solutions includes assigning unique identifiers to each equivalence group. This ensures that the representation remains consistent across different contexts. Each node representing a class or property will carry an attribute linking it to its equivalence group, making it possible to visualize the graph with these equivalences in mind. This structure can be further enhanced by the use of dynamic graph layouts, where nodes are organized based on their group memberships, or even interactive elements allowing users to filter or focus on specific equivalence groups.
This visualization approach not only enhances clarity in the representation of equivalence but also aligns with the fundamental goal of ontological representation – to enable users to understand relationships between entities, regardless of their specific context or origin, in a seamless and intuitive manner.
An approach in terms of implementation (here in JavaScript) could be the following.
1. Handling the equivalence group as a JavaScript Set:
let equivalenceSet = new Set();
equivalenceSet.add("https://example.org/class1");
equivalenceSet.add("https://example.org/class2");
equivalenceSet.add("https://example.org/class3");
2. Creating an identifier for the equivalence group:
Since equivalence groups are defined by a set of IRIs, you need to generate a unique identifier (e.g., UUID) to represent this equivalence group.
The generated identifier will serve as an attribute for the nodes that represent the classes or properties in this group.
You can use a library like uuid (the npm package) to generate a UUID.
let uuid = require('uuid');
let equivalenceGroupId = uuid.v4(); // Generates a unique ID for the equivalence group
3. Algorithmically:
Store the equivalence group in a set. This set will contain all the IRIs of the elements that belong to the group.
Each equivalence group is assigned a unique identifier (UUID), which will be added as an attribute for the nodes representing the classes or properties in the graph.
4. Ensuring Uniqueness of Attributes When Parsing RDF/XML:
When parsing the RDF/XML file and creating the graph, you need to ensure that each node (representing a class or property) has a unique equivalence group identifier as an attribute.
To achieve this, as you parse the RDF/XML file:
* Check whether a node belongs to an equivalence group.
* If the node is part of a group, assign the pre-existing equivalence group identifier.
* If it's a new equivalence group, generate a new identifier (UUID) and associate it with the node.
Example of ensuring uniqueness in a graph:
// Assume you are parsing RDF/XML and constructing the graph in Cytoscape.js
const cytoscape = require('cytoscape');
const uuid = require('uuid');

let cytoscapeGraph = cytoscape(); // headless instance (no container)
let equivalenceGroups = {};

// Stub: in a real parser, derive the group key from the owl:equivalentClass
// (or owl:equivalentProperty) statements encountered for this node.
function getEquivalenceGroupKey(node) {
  return node.equivalenceKey;
}

function handleNode(node) {
  let equivalenceKey = getEquivalenceGroupKey(node);
  if (!equivalenceGroups[equivalenceKey]) {
    equivalenceGroups[equivalenceKey] = uuid.v4(); // generate a new UUID if the group doesn't exist yet
  }
  cytoscapeGraph.add({
    group: 'nodes',
    data: {
      id: node.id,
      label: node.label,
      equivalenceGroupId: equivalenceGroups[equivalenceKey]
    }
  });
}

// Example node
let node = { id: 'class1', label: 'Class 1', equivalenceKey: 'group-A' };
handleNode(node);
5. Ensuring uniqueness of equivalence group IDs: the equivalenceGroups map above guarantees that all members of the same group receive the same UUID, while different groups receive different ones.
6. Representation in the Cytoscape Graph:
By following this approach, you ensure that equivalence groups are correctly represented in the Cytoscape graph, and each node has a unique identifier for the equivalence group it belongs to.
Here is the visualisation proposal resulting from applying this approach and reflecting the visual choices made, for object properties.
Additionally, the set of equivalent elements could be added as a label, at the risk of producing an overly long label. This point is open to exploration.
About inverse relationships
In the world of the semantic web and ontology modeling, inverse properties are a powerful tool.
Defined using owl:inverseOf, they allow us to express bidirectional relationships between concepts, making reasoning more efficient and models more intuitive.
For example, imagine the properties "hasParent" and "isParentOf". These two properties are inverses of each other, meaning that if "A hasParent B", we can infer that "B isParentOf A" without explicitly defining both relations.
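In Turtle, this only requires one axiom (hypothetical IRIs):

@prefix :    <https://example.org/onto#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

:hasParent owl:inverseOf :isParentOf .

# Asserting only :a :hasParent :b lets a reasoner infer :b :isParentOf :a .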
The real benefit?
We avoid redundancy, reduce complexity, and increase logical inference capabilities.
By linking properties together through inverse relations, we enhance the flexibility of data structures and improve knowledge representation. Inverse properties are key to creating robust ontologies where relationships are dynamic, easily navigable, and more logically connected.
How can this be represented visually? An edge carrying a dedicated inverse symbol can be used in the way illustrated by the following figure.
Property characteristics and symbols to attach
Usually, no symbols exist from a set-theory point of view, and some exist, but are rarely used in practice, from a logic point of view.
In order to define these concepts visually, the selected symbols are those indicated as preferred symbols in the following table.
They should make it possible to indicate, when tagging a property, the characteristics of object properties. There is a symbol for each of the characteristics proposed by Protégé, as shown in the next figure.
Tagging with these symbols (corresponding to checking boxes in Protégé) could be done with icons or, for a more synthetic approach, by prefixing the label with the symbols of the object property's characteristics.
For example, the label of the Sibling object property could be prefixed with the symbols for reflexive and transitive, the characteristics it is indicated to have.
An illustration of prefixing labels with these symbols is the following.
Going further with symbols
Let's note that the usage of symbols could be generalized, with the following table completing the previous one with complementary concepts and symbols relevant to OWL ontologies based on Description Logic.
Capturing complex definitions in OWL ontologies visually
How do we effectively represent complex definitions in OWL ontologies visually? This question is central to making ontologies more accessible, especially for those new to the field or collaborating across disciplines. One intriguing case is the concept of equivalence.
In OWL, equivalence defines when two concepts or entities are precisely the same in a given logical context. For instance, the concept of "Mother" can be expressed as being equivalent to "a Female who has at least one child." In a logical expression, this is written as:
Mother ≡ Female ⊓ ∃hasChild.Person
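For reference, a possible OWL serialization of this definition in Turtle (hypothetical IRIs) is:

@prefix :    <https://example.org/onto#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

:Mother owl:equivalentClass [
    a owl:Class ;
    owl:intersectionOf ( :Female
                         [ a owl:Restriction ;
                           owl:onProperty :hasChild ;
                           owl:someValuesFrom :Person ] )
] .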
While text and symbols are precise, they often fail to convey the intuitive understanding needed by a diverse audience.
Visual representations, like graphs, offer a promising alternative.
Yet, challenges abound. Should we rely on logical symbols such as ∃ (existential quantifier) and ⊓ (intersection), or seek more intuitive metaphors and icons?
How can we ensure that equivalence, a symmetric relationship, is easily distinguished from hierarchical or asymmetric ones?
For "Mother ≡ Female ? (hasChild ? Person)," a graph might link "Mother" to "Female" and "hasChild some Person" with bidirectional arrows or specialized shapes to signify equivalence.
But does this truly help users grasp the concept, or does it add another layer of abstraction?
This raises broader questions about the visual language of ontologies. Should we prioritize universal logic symbols, familiar to mathematicians and logicians, or lean towards visually engaging icons tailored to a wider audience?
How do we balance aesthetic appeal with computational efficiency, especially for large graphs?
How would you approach visualizing "Mother ≡ Female ⊓ ∃hasChild.Person" in a way that's both precise and accessible?
At the current stage, I just have a node (n24) created with my RDF library, which is now to be visually detailed. But how?
OWL Visualization: Beyond Object Properties
When we think of OWL (Web Ontology Language), we often visualize object properties as the primary relationships connecting entities in our ontologies.
But what if we expanded that view?
In a visual graph-based cartography, we can also bring datatype properties and their associated datatypes into the spotlight, creating a more comprehensive and enriched ontology landscape.
By integrating datatype properties into our graph visualization, we make the relationships between resources and data values more explicit.
This approach enhances clarity, enabling us to explore how datatypes like xsd:string, xsd:int, and xsd:decimal are related to specific classes or entities.
Just like object properties, datatype properties and their range datatypes are visualized seamlessly in the graph.
We can assign meaningful classes and datatypes to nodes, offering a detailed view of how data types are applied across an ontology.
The IRIs of these datatypes can be visualized as nodes, ensuring we have a complete picture of both the relationships and the data structures.
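A minimal Turtle sketch (hypothetical IRIs) of what such a visualization has to capture:

@prefix :     <https://example.org/onto#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

:serialNumber a owl:DatatypeProperty ;
    rdfs:domain :Product ;
    rdfs:range  xsd:string .
# In the proposed cartography, xsd:string itself becomes a node, linked to
# :serialNumber the same way a class is linked to an object property.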
This visualization approach empowers ontologists and developers to work with more sophisticated, compound graphs, giving them a clearer overview of the underlying data relationships and type systems.
It's a visual map that goes beyond just showing the entities to also represent how data values and types fit into the bigger picture.
Next time you're looking to explore your OWL data, consider visualizing both your object properties and datatype properties in a single, cohesive, interactive graph.
It's not just about relationships; it's about understanding the full structure of your data.
A first proposal is the following one. Let's note that I used for my proposal the same symbols as those used in Protégé: green rectangles for datatype properties and red circles for datatypes.
Aligning visual conventions with those of Protégé
When designing a visual language for OWL, a key challenge is ensuring clarity, usability, and alignment with the mental models of those working with ontologies.
Rather than reinventing the wheel, why not leverage an existing, well-adopted standard?
Protégé, widely regarded as the de facto OWL editor, already provides an intuitive color and symbol scheme for core OWL concepts:
Classes, Object Properties, Named Individuals, Datatypes, Annotations, and more: all visually distinguished.
By adopting these visual cues, a graphical OWL representation can feel more familiar, reducing cognitive load and improving adoption among users who already interact with OWL in Protégé.
This is exactly the approach I am taking with the OWL Visual Viewer I'm working on.
By leveraging Protégé’s conventions, the goal is to provide:
* More intuitive ontology visualization
* Easier onboarding for new users
* Consistent semantic interpretation across tools
Let's note that I used here exactly the same colors as in Protégé for my viewer.
Visualizing OWL Ontologies with Inferred Links: A New Perspective
Imagine opening your OWL ontology file and seeing not just what's asserted, but also the inferred connections generated by reasoning engines.
This approach can bring clarity to the intricate logic captured within ontologies and the added value of reasoning mechanisms.
Here’s the vision:
* Load your OWL ontology.
* Explore its structure and content interactively.
* Export inferred links from Protégé and import them into the visualization tool.
* The result? A graph that not only represents the original ontology but highlights inferred elements in a distinct, visually appealing way.
How do we differentiate?
Asserted elements are styled boldly and clearly, reflecting their foundational status. Inferred links are visually distinct, perhaps through lighter lines, dashed strokes, or unique colors, making it easy to grasp their role at a glance.
Why is this important?
This feature isn't yet supported by popular tools like OntoGraf, but it could revolutionize how we communicate about ontologies. By visualizing the interplay between assertions and inferences, we can better demonstrate the power of reasoning engines and the value of well-structured ontologies.
Providing only visualization features in a viewer means that no reasoner is included inside it.
Let's then consider what the process for feeding the viewer could be:
1- import the ontology, e.g. produced with Protégé;
2- launch the reasoning engine and export the inferred part of the ontology (supported by Protégé);
3- import the inferred ontology into the viewer: everything imported this way is typed as inferred and displayed differently in the viewer. Dashed links are one simple way. In some cases, we can imagine visualizing the information not only with links but with tags (e.g. a typing captured by a symbol tagging a node); we can then use visual means to differentiate the symbols for what is explicit and what is inferred (level of transparency, background, special border...). The idea is to identify what looks most appropriate for a majority of users.
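A minimal Cytoscape.js styling sketch of this differentiation, assuming imported inferred elements are flagged with a data field inferred = true (an assumption of this sketch, not an existing convention of the viewer):

const style = [
  // asserted elements: solid and bold, reflecting their foundational status
  { selector: 'edge', style: { 'line-style': 'solid', 'width': 2, 'line-color': '#333' } },
  // inferred elements: dashed, lighter, semi-transparent
  { selector: 'edge[?inferred]', style: { 'line-style': 'dashed', 'width': 1, 'line-color': '#999' } },
  { selector: 'node[?inferred]', style: { 'opacity': 0.6, 'border-style': 'dashed' } }
];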
The following figure shows how the feature for importing the inferred part of the ontology could look, proposed through menus (realization not yet done, but coming soon).
A future extension could be a direct connection to a reasoner that would dynamically update the inferred links. But this addresses something not directly related to advanced visualization and the definition of a notation for OWL ontologies.
OWL: A Solution to Synonymy, Polysemy, and Syntactic Ambiguity
In the complex world of data and knowledge, OWL (Web Ontology Language) plays a crucial role in ensuring precise and unambiguous semantic representation.
Synonymy: Through labels, OWL allows different terms to be associated with the same concept, thus unifying the vocabulary in a multi-source environment.
Polysemy: Each meaning of a term is handled distinctly by creating separate classes for each interpretation, enabling a more nuanced understanding of terms and their context.
Syntactic ambiguity: OWL clarifies the relationships between concepts through formal properties, eliminating ambiguities related to the structure of language.
With IRIs (Internationalized Resource Identifiers), OWL ensures unique and unambiguous identification of concepts, which is crucial for the integrity of information systems.
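A small Turtle sketch (hypothetical IRIs) showing how labels and distinct IRIs address synonymy and polysemy:

@prefix :     <https://example.org/onto#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Synonymy: several labels attached to one concept identified by a single IRI.
:Car rdfs:label "car"@en , "automobile"@en , "voiture"@fr .

# Polysemy: one ambiguous term, two distinct classes with distinct IRIs.
:Bank_FinancialInstitution rdfs:label "bank"@en .
:Bank_RiverSide rdfs:label "bank"@en .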
The topic is addressed from a semantic point of view in the following article:
By combining labels, annotations, and formal definitions, OWL provides a rich representation of knowledge that enhances interoperability and the management of complex data.
OWL ontologies not only structure information but also create a true semantic bridge between systems and users, allowing better utilization of data across various technologies.
Going further, we can consider that ontologies have one part for people and another for machines, where identifiers (IRIs) should be identifiers only, carrying no meaning. The parts dedicated to people are the symbols, icons, or text provided by labels, which can depend on the context: who is visualizing, and what will help them grasp the information they need when looking at the visual representation.
Exploring OWL Ontologies in a Whole New Way
Ontologies are more than just abstract models. They are living ecosystems where every concept, relationship, and axiom contributes to a structured vision of knowledge. But when opening an OWL file, we often see only part of the story... Our OWL Viewer changes that.
Not only does it provide an interactive semantic cartography, revealing the structure of an ontology as a graph, but it also unveils the richness of embedded RDF triples. In an RDF/XML file, much of the knowledge remains hidden, encapsulated in imports and implicit metadata. We make it visible.
Why does this matter?
* Better understanding of ontological dependencies – see not just an ontology's internal structure but also the concepts and assertions inherited from other RDF sources.
* Increased transparency – detect connections between ontologies and avoid hidden "black box" knowledge.
* A boost for interoperability – by clearly displaying what is imported, integration with other systems and standards becomes easier.
The goal? To provide ontology experts, data architects, and semantic web enthusiasts with a tool that meets the challenges of interoperability and knowledge modeling.
What do you think of this approach? Have you ever faced the limitations of classic OWL visualizations?
OWL Individuals: When One Entity Wears Many Hats
In an ontology, individuals are more than just data points.
They can have multiple types, multiple names, and multiple identifiers.
But how do you represent such complexity without turning a graph into a tangled web?
Imagine an individual that belongs to several classes C1, C2, C3, and has multiple labels Name1, Name2, Name3. If we naïvely display each connection, the graph explodes into a mess of redundant nodes and edges!
A Smarter Visualization Approach
Inspired by UML2 notation, we can compactly represent individuals like this: Name1, Name2, Name3 : C1, C2, C3. We can also "tag" the node with multiple icons, if some are available and associated with the types. Instead of duplicating entities or overloading the visual space, this notation preserves clarity while keeping the graph readable.
But what if different users need different perspectives?
What if some prefer explicit edges for each type while others want a more compact view?
Adaptive Graph Views
Switchable modes allow users to toggle between representations:
* Expanded mode – every type, every relation explicitly drawn.
* Compact mode – individuals grouped by name & type, reducing visual noise.
* Hybrid mode – critical information remains explicit, secondary details collapse dynamically.
By giving users the power to choose how they see the data, we don’t just show knowledge—we make it workable, scalable, and intuitive.
How do you currently handle the visual explosion of ontological data?
How to Deal with Externally Referenced Resources
In the world of ontologies, the value often lies not just in what is explicitly defined within the ontology itself, but in what it references externally.
These references—whether they point to external resources, other ontologies, or data systems—expand the model's utility, enabling interoperability across systems.
But how do we visualize and interact with these external references?
With the Ontology Viewer, we can now dynamically display these external resources!
When an external reference (such as an IRI path) is encountered, the viewer automatically creates a compound node that links directly to the external resource.
This compound node is generated on the fly, and the IRI path is displayed, giving a clear visual cue of where the resource is located and how it relates to the ontology.
The magic happens as we seamlessly integrate external resources into the ontology visualization, making connections that might have previously been invisible.
This approach brings both semantic depth and visual clarity to the representation, making it easier to understand complex interconnections.
Imagine navigating a graph where you can follow links to external knowledge, accessing both the defined and the referenced in a single visual context! Now, from a notation point of view, accurate symbols must be chosen for those nodes, indicating whether they are an ontology, another kind of RDF resource, or unknown/uncertain.
Here’s how these symbols can be used in a graphical ontology visualization:
* Imported OWL ontologies → displayed as compound nodes (encapsulated and integrated).
* Referenced RDF resources → linked with a simple edge (no reasoning effect).
* Uncertain RDF/OWL references → dashed node with an ambiguous indicator (potential OWLization, but unknown reasoning impact).
These symbols (or other symbols to become part of the proposed notation for OWL) should decorate graph visualizations such as the one displayed in the next figure, which results from an ontology with imports and references outside of the ontology definition. The Turtle sketch below shows what the source of such references can look like.
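This sketch uses hypothetical IRIs; whether a plain reference falls into the "referenced" or "uncertain" case depends on what the dereferenced resource turns out to be.

@prefix :    <https://example.org/onto#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

<https://example.org/onto> a owl:Ontology ;
    # imported ontology: its axioms take part in reasoning (compound node)
    owl:imports <https://example.org/upper-onto> .

# plain reference to an external RDF resource: no reasoning effect (simple edge);
# whether the target is itself an ontology may be unknown (the "uncertain" case)
:Product :documentedBy <https://example.org/catalog/datasheet42> .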
Visualisation of OWL extended with complementary Semantic Web technologies
Several articles have been written comparing OWL with alternative technologies, whether in the family of W3C Semantic Web and Linked Data standards, or among the graph database technologies dealing with LPGs (labeled property graphs).
The following article raises the question of visualization integrating the metadata introduced by RDF*. While file parsing should be extended to RDF* and, in the future, RDF 1.2, the question also arises in terms of visualization: how should such metadata be represented, and with which symbols?
The next article describes the actual differences between ontologies and graph databases.
As the displayed data can be heterogeneous in terms of formalism, i.e. OWL, RDFS or RDF*, it could be appropriate to reflect this visually and extend the proposed notation to non-OWL features.
This other article points out the global management of ontologies, which may require capturing the context of the ontologies. Here we can envisage combining ArchiMate visualization and the proposed notation in order to "project" ontological artefacts and capabilities within organisations. Supporting such needs is one of the targets of the ArchiCG initiative.
This other article explains RDF and its different extensions, some aimed at semantics with OWL, others more related to distributed graphs as a distributed web database. Should the proposed notation also support the description of such a distribution?
So, for all these various needs, here are some of the notation extensions proposed with the viewer.
TBD