Recent Advances in Web3D Semantic Modeling


Jakub Flotyński (Poznań University of Economics and Business, Poland), Athanasios G. Malamos (Hellenic Mediterranean University, Greece), Don Brutzman (Naval Postgraduate School, Monterey, USA), Felix G. Hamza-Lup (Georgia Southern University, USA), Nicholas F. Polys (Virginia Tech, USA), Leslie F. Sikos (Edith Cowan University, Australia) and Krzysztof Walczak (Poznań University of Economics and Business, Poland)
Copyright: © 2020 | Pages: 27
DOI: 10.4018/978-1-5225-5294-9.ch002


The implementation of virtual and augmented reality environments on the web requires integration between 3D technologies and web technologies, which are increasingly focused on collaboration, annotation, and semantics. Thus, combining VR and AR with semantics emerges as a significant trend in the development of the web. The use of the Semantic Web may improve the creation, representation, indexing, searching, and processing of 3D web content by linking the content with formal and expressive descriptions of its meaning. Although several semantic approaches have been developed for 3D content, they are not explicitly linked to the available well-established 3D technologies, cover a limited set of 3D components and properties, and do not combine domain-specific and 3D-specific semantics. In this chapter, the authors present the background, concepts, and development of the Semantic Web3D approach. It enables ontology-based representation of 3D content and introduces a novel framework that provides 3D structures in a semantics-friendly RDF format.
Chapter Preview


Immersive virtual reality (VR) and augmented reality (AR) environments are becoming increasingly popular in various application domains due to growing network bandwidth as well as the availability of affordable advanced presentation and interaction devices, such as headsets and motion tracking systems. One of the most powerful and promising platforms for immersive VR/AR environments is the Web. It offers suitable conditions for the collaborative development and use of VR/AR environments, including indexing, searching, and processing of the interactive 3D content of such environments. The development of web-based VR and AR has been enabled by various 3D formats (e.g., VRML (Web3D Consortium, 1995) and X3D (Web3D Consortium, 2013)), programming libraries (e.g., WebGL (WebGL, 2020) and WebXR (W3C Consortium, 2019)), and game engines (e.g., Unreal (Unreal engine, 2019) and Unity (Unity Technologies, 2019)).

A potential searching procedure may rely on matching the geometrical and structural characteristics of scenes (Tangelder et al., 2008; Attene et al., 2007; Papaleo et al., 2009). However, files containing complex 3D scenes are difficult to handle due to their enormous size. Thus, shape and structure matching may be inefficient in terms of computational effort and response time (Grana et al., 2006). Alternatively, textual annotations may be matched to retrieve semantically relevant content, but this approach fails in most cases: annotations written by authors depend on subjective factors, such as language and culture. Therefore, results based on textual matching are even poorer than those based on structural matching (Min, 2004). The need for a reliable searching mechanism led to the advent of the Semantic Web (W3C Consortium, 2014), which is currently a prominent trend in the evolution of the Web. It transforms the Web into a network that links structured content with formal and expressive semantic descriptions. Semantic descriptions are enabled by structured data representation standards, in particular the Resource Description Framework (RDF) (W3C Consortium, 2014), and by ontologies, which are explicit specifications of a conceptualization (Gruber, 2009), i.e., knowledge organization systems that provide a formal conceptualization of the intended semantics of a knowledge domain or of common-sense human knowledge. Ontologies consist of statements that describe terminology (the conceptualization): particular classes and properties of objects. Ontologies are intended to be understandable to humans and processable by computers (Berners-Lee, 2001).

In the 3D/VR/AR domain, ontologies can be used to specify data formats and schemes with comprehensive properties and relationships between data elements. In turn, collections of individuals of a knowledge domain, including their properties and the relationships between them, are referred to as knowledge bases. Knowledge bases consist of statements about particular objects that use classes and properties defined in ontologies. Hence, in the 3D/VR/AR domain, knowledge bases can be used to represent individual 3D scenes and objects.
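The distinction between an ontology (terminology) and a knowledge base (individuals) can be sketched with RDF-style subject-predicate-object triples. The following minimal Python example encodes triples as plain tuples purely for illustration; in practice a toolkit such as rdflib and a real RDF serialization would be used, and all names (`ex:Mesh`, `ex:partOf`, etc.) are hypothetical, not taken from any published 3D ontology.

```python
# Ontology: terminological statements (classes and their relations).
# All identifiers below are illustrative placeholders.
ontology = {
    ("ex:Mesh", "rdfs:subClassOf", "ex:SceneObject"),
    ("ex:Interpolator", "rdfs:subClassOf", "ex:SceneObject"),
}

# Knowledge base: statements about particular individuals (a 3D scene),
# expressed with the classes and properties the ontology defines.
knowledge_base = {
    ("ex:piston1", "rdf:type", "ex:Mesh"),
    ("ex:piston1", "ex:partOf", "ex:engineScene"),
    ("ex:rotor1", "rdf:type", "ex:Mesh"),
    ("ex:rotor1", "ex:partOf", "ex:engineScene"),
}

# A simple query: list all individuals asserted to be meshes.
meshes = sorted(s for (s, p, o) in knowledge_base
                if p == "rdf:type" and o == "ex:Mesh")
print(meshes)  # ['ex:piston1', 'ex:rotor1']
```

The ontology describes what a mesh *is*; the knowledge base states that particular objects in a particular scene *are* meshes.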

The Resource Description Framework Schema (RDFS) and the Web Ontology Language (OWL) (W3C Consortium, 2012) are languages for building statements in RDF-based ontologies and knowledge bases. In turn, SPARQL (W3C Consortium, 2013) is the most widely used query language for RDF-based ontologies and knowledge bases. In contrast to other techniques of content representation, ontologies and knowledge bases enable reasoning over the content. Reasoning infers tacit (implicit) statements from the statements explicitly specified by the authors; the inferred statements represent implicit content properties.
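A minimal sketch of such reasoning is the RDFS subclass rule: if an individual has a type, it also implicitly has every superclass of that type. The Python fragment below computes this closure over tuple-encoded triples; the class hierarchy and individual names are hypothetical, and a production system would use an actual RDFS/OWL reasoner rather than this hand-rolled loop.

```python
# Hypothetical subclass hierarchy (rdfs:subClassOf), illustrative only.
subclass_of = {
    "ex:PositionInterpolator": "ex:Interpolator",
    "ex:Interpolator": "ex:SceneObject",
}

# The only explicitly asserted statement about the individual.
explicit = {("ex:anim1", "rdf:type", "ex:PositionInterpolator")}

def infer_types(triples, hierarchy):
    """Repeatedly apply the RDFS rule: type(x, C) and C subClassOf D
    implies type(x, D), until no new statements are inferred."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (s, p, o) in list(inferred):
            if p == "rdf:type" and o in hierarchy:
                new = (s, "rdf:type", hierarchy[o])
                if new not in inferred:
                    inferred.add(new)
                    changed = True
    return inferred

closure = infer_types(explicit, subclass_of)
# A query for all ex:SceneObject individuals now matches ex:anim1,
# although that statement was never explicitly asserted.
print(("ex:anim1", "rdf:type", "ex:SceneObject") in closure)  # True
```

The inferred statements are exactly the "implicit content properties" mentioned above: they were never written by the author, yet a semantic query can retrieve them.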

The overall knowledge obtained from reasoning can be subject to semantic queries. For instance, connections between 3D objects that form hierarchies in scenes can be subject to reasoning and querying about the scenes’ complexity. Similarly, position and orientation interpolators in a 3D scene can be subject to reasoning and querying about the motion categories of objects (linear, curved, rotary, etc.). A semantically represented 3D piston engine can be subject to reasoning to infer and query about its type on the basis of the cylinder arrangement (in-line, multi-row, star or reciprocating).
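The piston-engine example can be sketched as a simple rule over a knowledge base: the engine's type is inferred from the asserted arrangement of its cylinders. The vocabulary and the classification rule below are illustrative assumptions, not drawn from any published engine ontology, and the rule is deliberately crude (a real reasoner would express it in OWL or SPARQL rather than Python).

```python
# Hypothetical knowledge base describing an engine and its cylinders.
# Bank angles (in degrees) are illustrative placeholder data.
kb = {
    ("ex:engine1", "ex:hasCylinder", "ex:cyl1"),
    ("ex:engine1", "ex:hasCylinder", "ex:cyl2"),
    ("ex:cyl1", "ex:bankAngle", 0),
    ("ex:cyl2", "ex:bankAngle", 0),
}

def engine_type(kb, engine):
    """Infer an engine category from the bank angles of its cylinders:
    a single shared angle suggests an in-line arrangement, two angles a
    V-type, and several angles a star (radial) arrangement."""
    cylinders = {o for (s, p, o) in kb
                 if s == engine and p == "ex:hasCylinder"}
    angles = {o for (s, p, o) in kb
              if s in cylinders and p == "ex:bankAngle"}
    if len(angles) == 1:
        return "in-line"
    if len(angles) == 2:
        return "V-type"
    return "star"

print(engine_type(kb, "ex:engine1"))  # in-line
```

The engine's type is nowhere stated in the knowledge base; it is derived on demand from the explicitly asserted geometry, which is precisely the kind of query the chapter's approach aims to support.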
