Preprint
Article

Collaborative Intelligence in a Decentralized Environment (CIDE)

Submitted: 13 November 2023
Posted: 14 November 2023

Abstract
Companies frequently cooperate to exchange intelligence rather than resort to competitive strategies in pursuit of higher levels of intelligence. Intelligence is essential in determining collaborative intelligence. Intelligence can be discerned from a range of behaviours and actions that go beyond mere individually observable activity. Collaborative intelligence is often defined as the combined actions of individuals (such as humans and machines) working towards a common goal; it encompasses more than behaviour alone and includes other crucial elements, such as the sharing of intelligence. The ontological view concerns the system's understanding and representation of information, data, and the fundamental reality it aims to capture and manage. This paper expands upon our previous research on intelligence in decentralized environments. The methodology employs an ontological view to derive intelligence in its various forms: data, information, and knowledge. The paper explores the fundamental significance of intelligence and semantic integration in supporting the collaborative intelligence framework. Semantic integration is crucial to establishing a shared understanding of individuals' data, information, and concepts, and is fundamental to effective communication, intelligence sharing, and decision-making within a collaborative environment.
Keywords: 
Subject: Computer Science and Mathematics - Artificial Intelligence and Machine Learning

1. Introduction

Organizations that rely on intelligence may demonstrate varying competitive and collaborative tendencies. The propensity of an organization to exhibit competitive or collaborative behaviour depends on various factors, including its strategic objectives, industry dynamics, cultural norms, and the unique use of information technology within its operational framework. Although collaboration and intelligence are both crucial components of collaborative intelligence, most studies have focused primarily on the concept of collaboration; they have either neglected intelligence or treated it merely as observable behaviour. In a dynamic and decentralized computing environment, each entity has many forms of intelligence (data, information, and knowledge) and a degree of autonomy. Many problems in private or public organizations necessitate the collaboration of multiple entities (information systems) to reach a resolution that maximizes the general purpose or utility while balancing the specific goals of each entity.
The notion of individual intelligence holds significant importance in the development of the concept of collaborative intelligence. The three most potent forms of intelligence are individual, collective, and collaborative intelligence. Numerous formal and informal definitions of intelligence have been proposed across several scientific disciplines, encompassing computer science, psychology, and philosophy. These definitions have failed to present a comprehensive depiction of intelligence. However, researchers and scientists in different areas commonly identify certain standard elements and properties shared among the basic semantics of the intelligence concept and its inherent characteristics. Certain psychologists conceptualize intelligence as a confluence of elements comprising an extensive breadth of domain-specific knowledge, cognitive agility, and aptitude for logical reasoning. Psychologists use the phrases "fluid intelligence" and "crystallized intelligence" to delineate these features.
"Fluid intelligence" describes the cognitive capacity for flexible reasoning and thinking. On the other hand, "crystallized intelligence" refers to the accumulation and retention of knowledge, factual information, and skills over an individual's lifetime [1].
Debenham (1989) proposed an extension to the conventional understanding of intelligence by providing a taxonomy with eight distinct categories: linguistic, logical/mathematical, spatial, bodily-kinesthetic, musical, interpersonal, intrapersonal, and naturalist intelligence. His argument challenges the conventional notion that general cognitive capacity, sometimes called "g", is the sole determinant of intelligence.
Although earlier conceptualizations of intelligence mostly centre on human beings, artificial intelligence (AI) uses computers and robots to replicate the cognitive abilities, related to problem-solving and decision-making, characteristic of the human mind. Various definitions of AI have emerged in recent decades. The authors of [2] propose the following definition: "AI is the field of study and application that focuses on the development of intelligent machines, particularly intelligent computer programs." AI can be understood as the pursuit of leveraging computational systems to emulate human intelligence, with the distinction that it is not necessarily constrained to replicating observable physiological processes [3]. Examining natural and artificial intelligence requires a comprehensive understanding of the interconnections between information, knowledge, and behavior.
According to [4], intelligence is the capacity to acquire knowledge, whether in human brains or artificial systems. Most scholars in artificial intelligence have argued that intelligence encompasses the following characteristics. Intelligence is a fundamental attribute exhibited by an entity or agent that engages in interactions within a specific environment, enabling it to navigate and respond to diverse circumstances effectively. The term "effectiveness" frequently signifies the capacity of an entity to accomplish an objective or attain a shared goal, taking into account predetermined criteria and preferences. Engaging and interacting with other entities is crucial to promote individual learning, adaptability, and flexibility, enabling the entity to respond effectively to various scenarios [5]. Any entity can detect its surroundings and make decisions to maximize the likelihood of reaching its goals.
The intelligence discussed above is attributed to a single individual or entity. Collaborative intelligence, by contrast, encompasses various entities or individuals engaged in the cognitive process. Today's business world is characterized by a heightened degree of dynamism and a rapid rate of digital transformation, placing significant emphasis on the capacity of organizations to navigate and respond to these evolving circumstances effectively. The form of intelligence that will hold the highest value in the near term is a combination of human and computer intelligence, not solely individual intelligence. Collaborative intelligence researchers believe that a new level of comprehension emerges when diverse minds come together.
Consider a scenario where a group collaborates to develop a presentation to make a favourable impression on a prospective client. In this scenario, each participant will contribute their respective general individual intelligence, which, when amalgamated, will yield what is commonly referred to as general collective intelligence. This approach exhibits a significantly elevated level of exertion, resulting in a collective intelligence that surpasses any individual within the group.
Resolving various challenges in private and public organizations sometimes necessitates the collaboration of several entities, namely information systems. This collaboration aims to achieve an outcome that optimizes a broader public objective or utility while also considering the individual goals of every organization involved. Individual problems or challenges can be argued to require tailored solutions. Therefore, adopting a novel collaborative approach presents a viable alternative that can facilitate the generation of innovative solutions by amalgamating the diverse contributions and intelligence of each participating organization.
The term "collaborative intelligence" (CIn) will characterize this collaboration. Collaborative intelligence thus emerges from the collaboration of multiple distributed intelligences and is characterized by its ability to reach consensus in decision-making processes. While multiple definitions of CIn exist, most do not attempt to define intelligence as a concept explicitly; consequently, they remain consistent with other definitions. We must therefore first explain the concept of intelligence in order to understand the new concept of CIn.
The structure of this article is as follows: The second section comprises an overview of the background and previous work. The third section provides a comprehensive overview of the levels of observation in the information environment. The definition of intelligence is presented in the fourth section. The fifth section presents the intelligence model based on the ontological view. The sixth section provides a comprehensive overview of the various forms of intelligence. The seventh section covers the aspects of the framework and the definition of collaborative intelligence (CIn). The conclusion is delivered in the eighth section.

2. Background and Related Work

2.1. Conceptualization and Ontological View

Within the realm of conceptual structures and their representation through ontological views, a multitude of frameworks have been introduced. Our research adopts a comprehensive framework as explicated in [6]. This framework offers an extensive methodology for depicting the state of the universe, grounded in a well-defined conceptual structure and supported by a language that enhances this depiction. Moreover, we leverage the ontological View from this framework to discern and interpret the multifaceted forms of intelligence. Our inquiry into intelligence forms is deeply rooted in this theoretical framework, establishing a sophisticated foundation for discerning the categorization and existence of intelligence within the cosmos.
Conceptualization is often perceived as an abstract and simplified version of individual knowledge regarding a particular matter or the cosmos at large [7]. It underscores the mental blueprint of reality, emphasizing the intrinsic value of conceptions through their correspondence with the realities they denote.
Conversely, ontology delves into the essence of existence, the constituents of reality, and the interconnections among entities. It is characterized in [7] as "A formal, explicit specification of a shared conceptualization." Here, ’conceptualization’ is construed as an abstract framework of domain-specific knowledge within the universe, pinpointing the pertinent concepts of that domain [8].
The term ’shared’ implies that an ontology encapsulates consensual knowledge, signifying its acceptance by a collective. ’Explicit’ refers to the precise definition of concepts within an ontology and the constraints applied to them. ’Formal’ denotes that the ontology is structured to be machine-interpretable. It is pertinent to note that consensus on an ontology is feasible primarily in an environment that is both closed and static. In contrast, an ontological view (OV) is apt for a closed yet dynamic setting, wherein each view is formally and explicitly articulated [6].
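To make the triple "shared, explicit, formal" concrete, an ontology can be sketched as a machine-checkable structure. The encoding below is a toy assumption of ours, not any standard ontology language; every class and field name is illustrative.

```python
from dataclasses import dataclass, field

# A minimal sketch of an ontology as a "formal, explicit specification of a
# shared conceptualization". All names here are illustrative assumptions.

@dataclass(frozen=True)
class Concept:
    name: str                   # explicit: each concept is precisely named

@dataclass
class Ontology:
    concepts: set = field(default_factory=set)       # the shared vocabulary
    relations: set = field(default_factory=set)      # (subject, predicate, object) triples
    constraints: list = field(default_factory=list)  # axioms as machine-checkable predicates

    def is_consistent(self) -> bool:
        """Formal: every constraint can be evaluated mechanically."""
        return all(check(self) for check in self.constraints)

# Example: a tiny shared domain with one axiom.
onto = Ontology()
onto.concepts = {Concept("Entity"), Concept("Relationship")}
onto.relations = {("Relationship", "connects", "Entity")}
# Axiom: every relation must refer only to declared concepts.
onto.constraints.append(
    lambda o: all(Concept(s) in o.concepts and Concept(t) in o.concepts
                  for (s, _, t) in o.relations)
)
print(onto.is_consistent())  # True for this consistent toy ontology
```

The "shared" aspect is represented only implicitly here: the vocabulary in `concepts` is what a collective would have to agree upon before any two systems could exchange the triples in `relations`.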

2.2. Intelligence: Classification of environment

The development of intelligence (In) and collaborative intelligence (CIn) requires a thorough understanding of the attributes of the environment. The customization of the CIn framework should be tailored to accommodate the distinctive characteristics of the context in which the intelligences are intended to be implemented. Categorizing environments based on observable characteristics is a prevalent approach within artificial intelligence (AI). The classification of environments is determined by the functional capabilities of AI agents operating within these particular environments. Agents have the ability to observe and engage in collaboration with their counterparts within these given environments.
The classification of information environments relies mainly on the constituent elements they encompass rather than on the agents’ skills. Theoretically, this classification framework offers a more comprehensive view, since it enables systematic categorization of diverse contexts within artificial intelligence. The essential point lies in the consideration that the universe can be understood as an environment characterized by a set of guiding principles that regulate its constituent elements. The classification of computing and information environments is determined by three key factors: 1) the extent of authority and decision-making power, 2) the level of distribution of decision-making and intelligence, and 3) the state of the environment, specifically any changes in its structure in terms of entities and the relationships between them.
  • Centralized and Decentralized Environment
    The first factor evaluates whether the environment exhibits characteristics of centralization or decentralization, whereas the second factor determines the presence of a distributed environment. Centralization and decentralization in an information environment refer to an organization’s strategies to govern and disseminate its information resources, systems, and decision-making authority. These notions affect how knowledge and data are spread throughout an organization. In contrast, "distributed" commonly denotes the spread or allocation of information, data, or computing resources among different locations, systems, or entities [9].
    Centralized environments are characterized by the concentration of power, decision-making, and authority in a single entity or geographical location. Every individual entity (i.e., a system component) possesses a finite level of autonomy. As [9] stated, an entity cannot make independent decisions when it is governed by the authority and influence of the most dominant entity.
    The decentralized information environment refers to a type of environment that lacks a central coordinating or governing organization. The collection consists of many independent entities, which may be situated in the same or disparate geographic locations. An autonomous entity is characterized by its capacity to operate autonomously to accomplish its objectives [10]. In other words, each autonomous entity represents an information system in which no single entity is the exclusive authority. Each entity in such environments can perceive and capture the environment’s state and develop intelligence about it (typically equipped with its intelligence). It can make decisions locally, and each entity can choose how to use these local intelligence resources to fulfill that entity’s objectives. However, there is a freedom of action within each entity, which guarantees that no single node has complete intelligence.
  • Distributed Environment
    In this case, the categorization of the environment is predicated on the second factor. A distributed information environment can encompass systems that are either geographically dispersed or locally located. One distinguishing characteristic of a distributed environment is its collaborative utilization of various entities, nodes, or components responsible for overseeing and executing the processing and administration of resources and data. These entities can exist in a shared physical space, such as a data center, or be distributed across several geographic locations, including distinct data centers in different cities or countries [11].
    The apparent paradox of a system being both distributed and centralized can be resolved by analyzing the definitions of location and control. A distributed system consists of multiple software components that are physically spread across various entities (computers) but operate together as a cohesive system. In a distributed system, entities can be geographically far apart and connected by a wide area network, or physically close and linked through a local network [12].
    Consider a cloud service enterprise that offers data storage solutions. In terms of physical implementation, the data may be replicated and distributed across various devices, depending on the availability and resilience of the resources (distributed) [13]. Nevertheless, irrespective of the geographical location of the equipment and data storage facilities, the cloud service provider assumes centralized management over them. Conversely, the notion of a system that is both decentralized and distributed is also coherent; Bitcoin serves as our illustrative case. Bitcoin is a decentralized system characterized by immutability, meaning that no single entity can alter it. Furthermore, it functions as a distributed global peer-to-peer network of autonomous computers.
  • Open and Closed Environments
    Prior studies, such as those referenced in [14], indicate that the observable environment may present itself as either closed or open. The genesis of states within these environments is often a consequence of modifications in their fundamental components. As delineated in [14], the classification of environmental states falls into two primary groups: identical and varying. By scrutinizing the observable alterations in entities and their interrelations, one can anticipate the classification of an environment. This predictive capability is crucial for understanding the dynamics of open versus closed systems and their respective states.
In dynamic environments, entities and relationships exhibit varying degrees of flexibility, allowing for the entrance or departure of any entity within the specified environments. The designation employed to characterize these environments is that of open universes (OE). Conversely, in closed environments that exhibit a consistent number of entities, entry and exit of entities are rigorously prohibited, and modifications are limited to the relationships among the entities.
These specific categories are commonly known as closed universes (CE). Closed universes can be classified into two main categories: static and dynamic. Entities and relationships within the static closed environment (SCE) demonstrate stability and remain unchanged without any alterations. On the contrary, in a dynamic closed environment (DCE), the entities remain fixed while their relationships demonstrate flexibility. The main aim of this work is to investigate the dynamic closed environment (DCE), which includes the static closed environment (SCE) as an essential element.
In this context, [14] proposed a model for distinguishing between open and closed environments. The classification of the environment as closed or open can be ascertained through an analysis of the determinant of the state matrix St. More precisely, this determinant can take one of two kinds of value: equal to one or greater than one. A determinant of St equal to one signifies that all environment states are identical, indicating a closed and unchanging environment. On the contrary, if the determinant of St is greater than one, at least one state in the environment differs from the rest. Hence, the environment can be classified as either closed dynamic or open, as depicted above.
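The classification rule just described (all states identical implies static closed; fixed entities with varying relationships implies dynamic closed; entities entering or leaving implies open) can be sketched directly. The state representation and function name below are our illustrative assumptions, not the formalism of [14]:

```python
# A sketch of the environment-classification rule, assuming each captured
# state is a pair (entities, relationships) of frozensets.

def classify_environment(states):
    """Classify an observed universe from a sequence of captured states."""
    distinct = set(states)
    if len(distinct) == 1:
        # |St| = 1: all observed states are identical.
        return "static closed environment (SCE)"
    entity_sets = {entities for entities, _ in states}
    if len(entity_sets) == 1:
        # Entities are fixed; only the relationships vary.
        return "dynamic closed environment (DCE)"
    # Entities enter or leave the environment.
    return "open environment (OE)"

e = frozenset({"a", "b"})
s1 = (e, frozenset({("a", "b")}))
s2 = (e, frozenset({("b", "a")}))
print(classify_environment([s1, s1]))  # static closed environment (SCE)
print(classify_environment([s1, s2]))  # dynamic closed environment (DCE)
print(classify_environment([s1, (frozenset({"a"}), frozenset())]))  # open environment (OE)
```

Counting distinct states plays the role of the determinant test: a single distinct state corresponds to the determinant equal to one, and more than one distinct state to a determinant greater than one.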

2.3. Collaborative Intelligence (CIn)

The notion of collaborative intelligence, like that of intelligence itself, has been given different meanings depending on the discipline from which the term derives. How different entities work and collaborate in the environment is constantly evolving, and CIn is one of the most effective ways to respond to these changes. Collaborative intelligence (CIn) is intelligence built in a digital world. It comprises a group of individuals who share intelligence and participate in structured deliberations.
Fundamentally, CIn extends the concept of intelligence from the individual to the group. It has existed for a very long time. However, the emergence of new technologies that connect an increasing number of individuals over longer distances to share knowledge and abilities has revolutionized what may be accomplished. Distributed systems are distinguished by an additional category of intelligence called collaborative intelligence. In this category, collaborative intelligence is exhibited through multi-agent, distributed systems in which each agent is autonomously positioned to contribute to a problem-solving network [15].
CIn can be more or less than the sum of the individuals' intelligence, depending on the forms of intelligence each individual possesses. The conventional understanding of collaborative intelligence is the intelligence shared by a group of individuals. Thus, it is formed by the overlap of separate intelligence sets. The overlap can include a smaller or larger portion of these intelligence sets and encompass all imaginable forms of intelligence.
The notion of Collective Intelligence (CI) was first presented in [16], in which the author claims that "Collective behaviour denotes any cooperative venture in which individuals pool their resources to maximize task completion." CI is here shorthand for "collective conduct" [17]. Initially, the primary objective of CI researchers was to examine how groups of individuals act and think "as a whole," e.g., through various coordinating and decision-making strategies [18].
Collaborative Intelligence and Collective Intelligence share a common origin in the study of natural and social ecosystems. One definition measures an agent's ability to receive and comprehend new information and to share resources, information, and essential duties with additional partners to tackle new local and global challenges in a dynamic environment; another states that "Collaborative Intelligence is a combined measure of an agent's collaborativeness and adaptability in dealing with the emergency" [19]. Several aspects of these definitions can be highlighted [20]:
  • These definitions do not attempt to describe the concept of intelligence itself; therefore, they are consistent with all other definitions of intelligence.
  • Because the definition of intelligence includes the term "acting," it stipulates that intelligence must be demonstrated in some behavior. According to this definition, for example, an article on Wikipedia would not be deemed intelligent in and of itself; however, the people who generated it would be intelligent.
  • The definition demands that individuals behave collectively or that their activities are connected. Two unrelated people in different cities brewing coffee on the same morning is not collective intelligence, whereas two coffee-shop servers working together would be. Individuals' actions must be related, but they need not cooperate or share the same goals. Different market actors purchase from and sell to each other; their actions are therefore connected, yet they may have different purposes.
  • At the group level, it typically matters considerably more that an observer can attribute aims to the group.

3. Levels of Observation in the Information Environment

The way an observer engages with an observation of the environment plays a crucial part in defining the results of the observation, thereby influencing the methods used to understand that environment. The selection of the observation level substantially impacts the anticipated results within a particular environment. The nature of the observation can be classified as reality, concerning the physical world; as extensional, referring to the domain or range of the observation; or as abstract, involving concepts or ideas rather than concrete entities or events. The following three subsections explain these levels from our perspective.

3.1. Reality Level: Existences and co-existences

This level, also known as the granularity level, represents the universe's state where both entities (existences) and relationships (coexistences) are present. Every entity possesses one or more intrinsic attributes, with at least one being distinct and indicative of existence. These entities and their relationships exist within a specific environment, considered an observed universe. This categorization classifies the observable universe as having a single state or several states, depending on the alterations in its constituent elements (entities and relationships).
Figure 4 illustrates this lower level. The focus of this work is centred on a dynamic, decentralized, and self-contained environment. The universe is classified as closed since its entities are unchangeable. Authority is dispersed, as no dominant entity exercises influence over the other entities. The environment is dynamic because the relationships between the entities are changeable. Furthermore, as previously stated in section 2.2, "Intelligence: Classification of environment," the environment can be categorized at this level.

3.2. Extensional Level: Intelligence and its Forms

At this level, the development of intelligence, sometimes called the observer universe (intelligence space), depends on the state of the observed universe, which reflects the entities and their relationships within reality (the observed universe). Each observer universe may possess three distinct manifestations of intelligence, with variations between the intelligences owned by different observer universes. The term "intelligence" here encompasses these various forms rather than relying solely on observable behaviour to elucidate and structure this intricate concept. Although the emergence of separate states depends on alterations in the relationships between entities in the environment, the various forms of intelligence arise at the extensional level as a result of these states, as explained in the "Forms of Intelligence" section.

3.3. Abstraction Level: collaborative intelligence

At this stage, the observation of the environment becomes more abstract and less tangible compared to previous levels of observation. The abstraction level refers to a simplified representation of reality where numerous distributed observer universes (intelligence spaces) coexist, each possessing its intelligence, as illustrated in Figure 4 above. As a result of the abstract observation, the process of semantic integration will occur within distributed intelligences, leading to the emergence of a new form of intelligence. This level facilitates making decisions and attaining shared objectives that surpass the individual aims of each universe alone.
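The semantic integration described at this level can be illustrated with a toy merge of two observer universes' local vocabularies through a shared alignment. All names, data, and the alignment mechanism below are hypothetical; real semantic integration would rest on the ontological views discussed earlier:

```python
# A toy sketch of semantic integration at the abstraction level: two observer
# universes align their local vocabularies through a consensual mapping,
# yielding intelligence that neither universe holds alone.

def integrate(local_a: dict, local_b: dict, alignment: dict) -> dict:
    """Merge two local knowledge maps (term -> set of facts), using an
    alignment that maps B's terms onto the shared terms used by A."""
    merged = {term: set(facts) for term, facts in local_a.items()}
    for term_b, facts in local_b.items():
        shared_term = alignment.get(term_b, term_b)
        merged.setdefault(shared_term, set()).update(facts)
    return merged

a = {"client": {"prefers email"}}
b = {"customer": {"based in Oslo"}}
# The alignment expresses the consensus that "customer" and "client"
# denote the same concept.
merged = integrate(a, b, {"customer": "client"})
print(merged["client"])  # both facts, contributed by different universes
```

The merged map exceeds what either universe could decide upon alone, which is the sense in which a new form of intelligence emerges at this level.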

4. Intelligence: Definition

In a previous study [14], we presented the definition of intelligence employed in this paper. We define intelligence as capturing the universe’s state in terms of the changes in its elements (generating three distinct forms: data, information, and knowledge). Intelligence, in its most comprehensive sense, refers to a universe capturing the state of another universe. This involves converting the captured state into data, which is then further transformed into information and ultimately transformed into knowledge. This transformation will be achieved through conceptualization and the adoption of ontological perspectives. Figure 1 illustrates the correlation between intelligence and its manifestations concerning the conceptualization and ontological view.

5. Intelligence: Ontological view-based model

In a prior study [14], we investigated intelligence as a model In<Dt, I, K>, which has three components: Dt represents the first form of intelligence, known as Data; I represents the second form, known as Information; and K represents the third form, known as Knowledge. The relationship between the intelligence model and conceptualization is illustrated in Figure 2.
This work centres on the facets of intelligence concerning the universe's current state. A universe state can be associated with an ontological view (OV) for a conceptualization structure supported by a language. A conceptualization is an abstract and simplified view of the universe we aim to depict for a specific purpose [21]. An ontology is a specific and formal representation of a commonly understood notion of a specific domain [22]. It is formal in that it can be comprehended and interpreted by machines. Regarding conceptualization, OV can serve as a foundation for two stages of development of the forms of intelligence.
  • Firstly, it provides the interconnected information on two parts of semantics:
    - The connection between the state and the appropriate conceptualization structure.
    - The association of the data with the relevant specification through the use of a supporting language.
  • Additionally, the association axioms of OV facilitate the transformation of knowledge in a manner that enables direct reasoning, hence enhancing the actionable form of intelligence.
Prior to discussing intelligence modelling, this section will elucidate certain terminologies.
  • Captured state: refers to the current snapshot of the environment’s structure, represented by entities and relationships.
  • Data: is a form of raw intelligence that has been obtained by converting the captured state, but has not been organized or processed in any way.
  • Information:is the second form of intelligence that can be generated through data transformation. Information is data that has been organized, structured, and presented meaningfully.
  • Knowledge: Knowledge is the third form of intelligence that can be formed through the transformation of information. Knowledge is explicit information retrieved from implicit information by adding rules (axioms).
  • The Observer Universe (also known as U i n -Intelligence space) is where forms of intelligence will develop.
  • The Observed Universe, known as ( U r -Reality), is where the existences known as “entities” and the coexistences known as “relationships” between them exist. This universe can be broken down into two subcategories based on the possible alterations to its states (entities and relationships).
  • Converting function: the function that expresses the conversion of the captured state's formats into other formats, representing the first form of intelligence (data). The universe (intelligence space), however, may keep the same formats without any changes.
  • Transformation function: a function that takes one form as input and produces an output that has been transformed into another form (e.g., information into knowledge).
  • Decentralized Environment: is often characterized by decentralized control, which means that there is no single controlling entity or authority. This can contrast with centralized environments, where a single central authority or entity makes decisions. Entities in a decentralized environment are often distinguished by their ability to perceive their environment. They are autonomous entities that can operate independently and make their own decisions without external direction or control.
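The terminology above can be illustrated with a minimal Python sketch. The class and function names are hypothetical stand-ins for the captured state, the converting function, and the first transformation function; they are not an implementation from this paper.

```python
from dataclasses import dataclass

# Hypothetical sketch of the terminology above: a captured state (entities
# and relationships) is converted into raw data by a converting function,
# then enriched with ontology-derived meaning to yield information.

@dataclass
class CapturedState:
    entities: set        # the entities observed in the environment
    relationships: set   # the relationships between those entities

def converting_function(state: CapturedState) -> dict:
    """Express the captured state in another format: raw, unprocessed data."""
    return {"entities": set(state.entities),
            "relationships": set(state.relationships)}

def transformation_function(data: dict, ontology_labels: dict) -> dict:
    """Attach meaning (labels drawn from an ontological view) to raw data,
    producing the second form of intelligence: information."""
    return {e: ontology_labels.get(e, "unknown") for e in data["entities"]}

state = CapturedState(entities={"s1", "s2"}, relationships={("s1", "s2")})
data = converting_function(state)
info = transformation_function(data, {"s1": "Sensor", "s2": "Actuator"})
```

The sketch keeps the structure of the captured state intact through the conversion, mirroring the definition of the converting function above.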

6. Forms of Intelligence

Capturing the state of the universe (U-Reality) is the initial phase in the development of various forms of intelligence, according to [14]. The state may consist of facts from many sources, such as sensors, user input, devices, and additional systems.
In [14], various forms of intelligence are discussed. In the subsequent steps, we will provide a concise overview of these forms, beginning with the first form (data) and concluding with the last form (knowledge). Each form will be accompanied by a formal representation, as described in [14].
  • Captured state: To analyze the distinctions between "conceptualization" and "capturing the state of an environment" in the context of observation, it is necessary to comprehend how each concept relates to the act of observing and the outcomes of this observation. The correlation between conceptualization and the representation of the state of an environment is intricately related to the observation process, although its objectives may differ depending on the observation being carried out.
    [Proposition 5.1]: Assuming $C = \langle D, W, R\rangle$ is a conceptualization structure representing entities and relationships within a decentralized environment, with a conceptualization structure defined as an abstract view of the environment, we can infer that this structure provides a higher-level perception of the observed environment.
    In contrast, the fundamental aspect of the environment's structure, denoted $St = \langle G, S, T\rangle$, represents the environment's actual manifestation. This structure corresponds to a lower-level perception, wherein entities and their relationships coexist within the same framework.
    Figure 3. shows different ways to observe U-Reality
    The state structure comprises three elements: G , representing the domain of the universe of reality, including entities and relationships; S , representing the possible states; and T , representing the intensional relations. The constituent elements of the conceptualization framework have been delineated in [6].
    The correlation between the conceptualization structure, representing a high level of understanding of the universe of reality [6], and the state structure resulting from our approach, indicating a low level of understanding of the universe of reality, is depicted in Figure 2.
    The captured states can be classified into two separate structures based on the fundamental alterations that take place in the elements of the universe of reality; the structures themselves, however, are confined to the observer universe. These structures may exhibit variations or resemblances.
  • First form of intelligence (Data): it is reached through a converting function that converts the captured state of reality ($U_r$) into this form within the intelligence-space universe ($U_{in}$). The converting function is as follows:
    $$C_f(St_{U_r}) = Dt_{U_{in}} \quad (1)$$
    where $St_{U_r}$ denotes the state of $U_r$, with $St_{U_r} = \langle G, S, T\rangle$, and $Dt_{U_{in}}$ represents the data within the intelligence space, having the same structure as the captured state:
    $$C_f(St\langle G, S, T\rangle_{U_r}) = Dt\langle G, S, T\rangle_{U_{in}}, \qquad Dt_{U_{in}} = \langle G, S, T\rangle.$$
  • Second form of intelligence (Information): data alone is insufficient and unintelligible, so it is necessary to add meaning to it. Adding semantics derived from $O_V$ transforms data within $U_{in}$ into information. $U_{in}$ may have a transformation function that allows it to do this:
    $$Tf_1^{U_{in}}\langle D_{U_{in}}, O_V\rangle = I_{U_{in}} \quad (2)$$
    where $D_{U_{in}}$ represents the first form within the observer universe, $O_V$ is an ontological view that enables the transformation from data to information, and $I_{U_{in}}$ denotes information.
  • Third form (Knowledge): $U_{in}$ may be equipped with a function to extract new information, implicit in the existing (explicit) information. The function employs axioms derived from the ontological view (these axioms are a set of logical formulae). With this knowledge, the intelligence space can make sound decisions and solve problems. The transformation function can be expressed as follows:
    $$Tf_2^{U_{in}}\langle I_{U_{in}}, O_V\rangle = K_{U_{in}} \quad (3)$$
    where $I_{U_{in}}$ is explicit information possessed by $U_{in}$, and $O_V$ is applied to extract the axioms used in deduction. The if-then statement is a common example of deductive reasoning: using logic, if $A = B$ and $B = C$, then $A = C$. Moreover, because first-order logic (FOL) was used as the language to specify data, its deduction rules also apply. Equations (1), (2), and (3) represent the three forms of intelligence (data, information, and knowledge, respectively).
    Figure 4. shows the nature of intelligence and its form.
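The deduction step behind $Tf_2$ can be sketched as a tiny forward-chaining loop. The fact encoding and the single rule used (transitivity, as in the $A = B$, $B = C$ example above) are illustrative assumptions, not the paper's FOL machinery.

```python
# Minimal sketch of Tf2: deriving knowledge from explicit information by
# repeatedly applying an if-then axiom (here, transitivity: if a=b and
# b=c, then a=c). Facts are encoded as (left, right) pairs.

def transform_to_knowledge(information: set) -> set:
    """Apply the transitivity axiom until no new facts can be deduced."""
    knowledge = set(information)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(knowledge):
            for (c, d) in list(knowledge):
                if b == c and (a, d) not in knowledge:
                    knowledge.add((a, d))   # deduce a = d
                    changed = True
    return knowledge

facts = {("A", "B"), ("B", "C")}            # explicit information
knowledge = transform_to_knowledge(facts)   # now also contains ("A", "C")
```

The loop makes the implicit fact $A = C$ explicit, which is exactly the sense in which knowledge extends information in the definition above.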

7. Collaborative Intelligence (CIDE): Framework and Definition Aspects

Throughout human history, collaborative intelligence has been present in various forms such as families, corporations, nations, militaries, and other groups. The Google search engine exemplifies the emergence of collaborative intelligence: the PageRank algorithm analyzes a large volume of web links generated by millions of users to determine the popularity and usefulness of web pages. Wikipedia represents another system of collaborative intelligence, in which thousands of volunteers from around the world collaboratively create an extensive, high-quality intellectual product with minimal centralized oversight. $CIDE$ is an expression of intelligence that arises from the various scattered intelligences present in the environment.
The consensus among researchers is that collaborative intelligence emerges from the collective actions of individuals; however, it can also emerge from integrating many forms of distributed intelligence. Furthermore, many definitions of collaborative intelligence unintentionally overlook a fundamental aspect, the definition of intelligence itself, while emphasizing the collaborative element. A comprehensive understanding of intelligence in its various aspects and manifestations is crucial for establishing collaborative intelligence, and this aspect should not be disregarded or neglected. To establish a comprehensive framework for collaborative intelligence, it is essential to analyze the constituent aspects of intelligence thoroughly, carefully evaluating its complex dimensions and unique characteristics. A thorough grasp of the combined benefits that emerge from the collaboration of various forms of intelligence can be achieved only by clearly defining the idea of intelligence.
The oversight of defining intelligence within collaborative intelligence definitions not only leaves a critical gap in understanding but also limits the scope and applicability of the concept. Without a comprehensive grasp of intelligence, the essence of collaboration remains elusive, preventing a holistic exploration of how multiple intelligences combine to achieve superior outcomes. In essence, collaborative intelligence can only be fully understood or harnessed by first addressing the intelligence underlying its foundation.
Therefore, when we begin our exploration of collaborative intelligence, it is crucial to fill this significant void by crafting a thorough, accurate, and complex definition of intelligence. The first phase is vital as it establishes a solid basis for a comprehensive investigation of collaborative intelligence, thus unleashing its immense potential in several fields. At the beginning of this paper, we have already presented a thorough definition of intelligence and its forms.
$CIDE$ is a manifestation of intelligence that emerges from the different distributed intelligences in a decentralized environment. It is a new form of intelligence that does not exist in any individual intelligence but arises in the partial overlap of the distributed intelligences; the emergence of $CIDE$ adds no value, however, if these distributed intelligences overlap entirely. The framework for collaborative intelligence is depicted in Figure 5. This framework comprises three core elements that contribute to the emergence of $CIDE$, as follows:
  • A set of decentralized intelligences, each of which will be individually developed, relying on the unique state that has been captured for the same domain of interest mentioned in section 6.
  • A semantic integration layer will employ an ontological view to find the intersections among distributed intelligences in a decentralized environment.
  • A collaborative intelligence layer will yield substantial advantages by constructing a logical framework (a reasoning system) that enhances the efficiency of decision-making, a capability that individual intelligences cannot attain in isolation.
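Under simplifying assumptions, the three core elements above can be sketched in a few lines: each decentralized intelligence is modelled as a plain set of (subject, predicate, object) facts, and semantic integration is reduced to set intersection. All names and facts below are illustrative, not the paper's formal machinery.

```python
# A minimal sketch of the CIDE framework's three elements: decentralized
# intelligences (sets of facts), a semantic-integration layer (their
# intersection), and a collaborative layer (the combined whole that no
# single intelligence holds in isolation).

def semantic_integration(intelligences: list) -> set:
    """Semantic integration layer: find the facts shared by every
    distributed intelligence (their intersection)."""
    return set.intersection(*intelligences)

def collaborative_layer(intelligences: list) -> set:
    """Collaborative intelligence layer: combine the partial intelligences
    into a whole unavailable to any single intelligence."""
    return set.union(*intelligences)

i1 = {("pump", "status", "on"), ("site", "zone", "A")}
i2 = {("pump", "status", "on"), ("valve", "state", "open")}

shared = semantic_integration([i1, i2])   # the overlap between i1 and i2
combined = collaborative_layer([i1, i2])  # facts neither holds alone
```

Note how the shared facts anchor the combination: with no overlap there is no common ground to integrate, and with complete overlap the combination adds nothing, mirroring the emergence condition discussed below.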
$CIDE$ can be defined as the manifestation of intelligence that emerges from the various intelligences distributed throughout the environment: a novel type of intelligence that arises from many intelligences operating in a decentralized environment. For this form to exist, however, there must be only a partial overlap between these distributed intelligences; if the overlap is complete, this form of intelligence does not emerge. The formal definition of $CIDE$ is as follows: "$CIDE$ is an emergent form of intelligence that arises from the collaboration of multiple distributed intelligences in a decentralized environment, without any predetermined design." Figure 5 depicts the formation of $CIDE$ resulting from the convergence of overlapping intended intelligences.
Figure 5. demonstrates the emergence of Collaborative Intelligence.

7.1. Categories of Overlap among Distributed Intelligences

In this context, "overlap" pertains to the intersection of intended intelligences. These intended intelligences emerge from the intersection of numerous distributed intelligences originating from various intelligence spaces. As previously mentioned, our perception and comprehension of intelligence play a pivotal role in developing collaborative intelligence.
In our endeavour to develop intelligence and collaborative intelligence, we have adopted an approach closely aligned with the one outlined in [23], which involves the construction of intended models within a conceptualization framework. The primary focus of our study is information, the second form of intelligence. By adding semantics to data, the first form of intelligence, we can effectively transform it into significant and essential information. This transformation is rooted in the ontological view used in the conceptualization model.
Our assumption rests on utilizing the conceptualization model and applying an ontological view to determine the intersection between intended intelligences. To streamline the concept, assume the existence of two observer universes, called intelligence spaces, $U_{in1}$ and $U_{in2}$, each of which captures a distinct state of the observed universe (reality, $U_r$). The intelligence spaces develop their intelligence models by utilizing different ontological views, namely $\mu_F$ and $\pi_s$.
The intended models within the ontological views $\mu_F$ and $\pi_s$ align with the second form of intelligence, denoted as the intended intelligences, in each intelligence space ($U_{in1}$ and $U_{in2}$). $\iota_{H_F}(L_1)$ denotes the set of intended intelligences generated by the first intelligence space through $\mu_F$; conversely, $\iota_{H_S}(L_2)$ signifies the intended intelligences of the second intelligence space, employing $\pi_s$.
Subsequently, the potential overlap of distributed intelligences, which are categorized into three distinct classes based on the intersection of their intended intelligences, will be delineated:
  • [Definition 7.1.1 Partial Overlap] Given two sets of intended intelligences, the first, denoted $\iota_{H_F}(L_1)$ and pertaining to language $L_1$, is derived from the first ontological view, $\mu_F$; the second, denoted $\iota_{H_S}(L_2)$ and associated with language $L_2$, is derived from the second ontological view, $\pi_s$. $\mu_F$ is partially overlapping (symbolized by $\Xi$) with $\pi_s$ if and only if $\iota_{H_F}(L_1)$ partially intersects ($\bowtie$) with $\iota_{H_S}(L_2)$; in other words, the intersection between these two intended intelligences is non-empty and the two sets are unequal.
    $$(\iota_{H_F}(L_1) \bowtie \iota_{H_S}(L_2)) \Leftrightarrow (\mu_F \;\Xi\; \pi_s)$$
    $$\big((\iota_{H_F}(L_1) \cap \iota_{H_S}(L_2) \neq \emptyset) \wedge (\iota_{H_F}(L_1) \neq \iota_{H_S}(L_2))\big) \Rightarrow (\mu_F \;\Xi\; \pi_s)$$
  • [Definition 7.1.2 Complete Overlap] Given two sets of intended intelligences, the first, denoted $\iota_{H_F}(L_1)$ and pertaining to language $L_1$, is derived from the first ontological view, $\mu_F$; the second, denoted $\iota_{H_S}(L_2)$ and associated with language $L_2$, is derived from the second ontological view, $\pi_s$. $\mu_F$ is completely overlapping (symbolized by $\boxdot$) with $\pi_s$ if and only if $\iota_{H_F}(L_1)$ completely intersects ($\ltimes$) with $\iota_{H_S}(L_2)$; in other words, the intersection contains every element of both intended intelligences, and the two ontological views overlap either completely or partially.
    $$(\iota_{H_F}(L_1) \ltimes \iota_{H_S}(L_2)) \Leftrightarrow \big((\mu_F \boxdot \pi_s) \vee (\mu_F \;\Xi\; \pi_s)\big)$$
    $$\big((\iota_{H_F}(L_1) \cap \iota_{H_S}(L_2) \neq \emptyset) \wedge (\iota_{H_F}(L_1) = \iota_{H_S}(L_2))\big) \Rightarrow \big((\mu_F \boxdot \pi_s) \vee (\mu_F \;\Xi\; \pi_s)\big)$$
  • [Definition 7.1.3 Non-Overlap] Given two sets of intended intelligences, the first, denoted $\iota_{H_F}(L_1)$ and pertaining to language $L_1$, is derived from the first ontological view, $\mu_F$; the second, denoted $\iota_{H_S}(L_2)$ and associated with language $L_2$, is derived from the second ontological view, $\pi_s$. $\mu_F$ is non-overlapping (symbolized by $\boxtimes$) with $\pi_s$ if and only if $\iota_{H_F}(L_1)$ does not intersect ($\rtimes$) with $\iota_{H_S}(L_2)$; in other words, the intersection between these two intended intelligences is empty ($\emptyset$), and the two ontological views do not overlap.
    $$(\iota_{H_F}(L_1) \rtimes \iota_{H_S}(L_2)) \Leftrightarrow (\mu_F \boxtimes \pi_s)$$
    $$(\iota_{H_F}(L_1) \cap \iota_{H_S}(L_2) = \emptyset) \Rightarrow (\mu_F \boxtimes \pi_s)$$
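For finite sets, the three overlap categories of Definitions 7.1.1–7.1.3 reduce to a simple classification. Treating intended intelligences as plain sets of model identifiers is a simplification for illustration only.

```python
# The three overlap categories restated for finite sets of intended
# intelligences (each set stands in for iota_HF(L1) or iota_HS(L2)).

def classify_overlap(i_hf: set, i_hs: set) -> str:
    """Classify two sets of intended intelligences by their intersection."""
    intersection = i_hf & i_hs
    if not intersection:
        return "non-overlap"        # Definition 7.1.3: empty intersection
    if i_hf == i_hs:
        return "complete overlap"   # Definition 7.1.2: identical sets
    return "partial overlap"        # Definition 7.1.1: non-empty, unequal

assert classify_overlap({1, 2}, {3, 4}) == "non-overlap"
assert classify_overlap({1, 2}, {1, 2}) == "complete overlap"
assert classify_overlap({1, 2}, {2, 3}) == "partial overlap"
```

Only the partial-overlap case supports the emergence of $CIDE$, consistent with the framework definition above.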
Figure 6 illustrates the categorization of the overlap of many distributed intelligences in a decentralized environment, which occurs when their intended intelligences intersect.
Figure 6. Displays the degree of overlap and models of languages, approaches, Intended models, and CI.

7.2. Foundations of Collaborative Intelligence: A Theoretical Overview

In the context of developing a collaborative intelligence framework within a decentralized environment, semantic integration plays a pivotal role. In centralized environments, ontologies suit information systems because they are embedded implicitly within the software component; in distributed or decentralized environments, the scenario changes, and each system holds its own ontological view based on its semantics.
Before delving into the intricacies of collaborative intelligence ($CIDE$), it is imperative to establish precise definitions and terminologies. These definitions not only serve as a foundation for understanding $CIDE$ but are also derived from the ontological view. The following content presents formal definitions of relevant terms and additional definitions rooted in ontological views.
  • [Definition 7.2.1 Intended structure] For each possible state $s \in S$, the intended state of $s$ according to $dt$ is the structure $M_s^{dt} = \langle G, T_s^{dt}\rangle$, where $T_s^{dt} = \{p(dt) \mid p \in T\}$ is the set of extensions (relative to $dt$) of the elements of $T$. $M^{dt} = \{M_s^{dt} \mid s \in S\}$ denotes all the intended intelligence structures of $M$.
    The definitions presented in 7.2.2, 7.2.3, 7.2.4, 7.2.5, and 7.2.6 have been derived from a framework that aligns with conceptualization and ontology [24]; slight adjustments have been made to suit our research requirements.
    [Definition 7.2.2 Model of Language] For a given ontological view and logical language $L$ with a vocabulary $V_L$, we can define a model for $L$ as a structure $\langle S, I\rangle$, where $S = \langle Dt, T\rangle$ is a state structure and $I: V_L \rightarrow G \cup T$ is an interpretation function assigning elements of $G$ to constant symbols of $V_L^C$ and elements of $T$ to predicate symbols of $V_L^P$.
    [Definition 7.2.3 Ontological Commitment] For a given ontological view, $H = \langle dt, \Im\rangle$ is an ontological commitment for $L$, where $dt = \langle G, S, T\rangle$ is data and $\Im: V_L \rightarrow G \cup T$ is an intensional interpretation: a function assigning elements of $G$ to constant symbols of $V_L^C$ and elements of $T$ to predicate symbols of $V_L^P$.
    [Definition 7.2.4 Ontology] Given an ontological view and a language $L$ with ontological commitment $H$, an ontology for $L$ is a set of axioms designed such that the set of its models approximates as closely as possible the set of intended models of $L$ according to $H$.
    [Definition 7.2.5 Compatible] Given an ontological view, a language $L$ with a vocabulary $V_L$, and an ontological commitment $H = \langle Dt, \Im\rangle$ for $L$, a model $\langle S, I\rangle$ is compatible with $H$ if: i) $S \in S_{Dt}$; ii) for every constant symbol $c \in V_L^C$, $I(c) = \Im(c)$, where $I$ is an extensional interpretation and $\Im$ is an intensional interpretation; iii) there exists some $dt \in Dt$ such that, for every predicate symbol $v \in V_L^P$, $I$ maps $v$ into an admissible extension of $\Im(v)$, i.e., there exists a conceptual relation $p$ such that $\Im(v) = p$ and $p(dt) = I(v)$.
    [Definition 7.2.6 Intended intelligence (a second form of intelligence)] The set $\iota_H(L)$ (i.e., information, "Info") of all models of $L$ that are compatible with $H$ is called the set of intended intelligences of $L$ according to $H$. $\iota_H(L)$ signifies the intelligence intended for this study.
[Definition 7.2.7 Vocabulary] Identifying the terminology this research uses is essential to supporting collaborative intelligence between two systems. Figure 7 presents a comprehensive compilation of the vocabularies associated with the items related to this research. In the given context, we consider two logical languages, designated $L_1$ and $L_2$, each characterized by a specific vocabulary, denoted $V_{L_1}$ and $V_{L_2}$, respectively.
$$V_{L_1} = \{V_{L_1}^C, V_{L_1}^P\}, \qquad V_{L_2} = \{V_{L_2}^C, V_{L_2}^P\}$$
Each ontological view possesses a vocabulary, $V_{\mu_F}$ for the first ontological view and $V_{\pi_s}$ for the second, each of which consists of constant symbols ($V_{\mu_F}^C$, $V_{\pi_s}^C$) and predicate symbols ($V_{\mu_F}^P$, $V_{\pi_s}^P$):
$$V_{\mu_F} = \{V_{\mu_F}^C, V_{\mu_F}^P\}, \qquad V_{\pi_s} = \{V_{\pi_s}^C, V_{\pi_s}^P\}$$
Moreover, every intended intelligence $\iota_{H_F}(L_1)$ and $\iota_{H_S}(L_2)$ comprises two sets of vocabulary: the first is situated within the intersection area, denoted $\overline{V}_{\iota_{H_F}(L_1)}$ and $\overline{V}_{\iota_{H_S}(L_2)}$, while the other is positioned outside the intersection area, denoted $\widetilde{V}_{\iota_{H_F}(L_1)}$ and $\widetilde{V}_{\iota_{H_S}(L_2)}$, yet still within the intended intelligence:
$$V_{\iota_{H_F}(L_1)} = \{\overline{V}_{\iota_{H_F}(L_1)}, \widetilde{V}_{\iota_{H_F}(L_1)}\}, \qquad V_{\iota_{H_S}(L_2)} = \{\overline{V}_{\iota_{H_S}(L_2)}, \widetilde{V}_{\iota_{H_S}(L_2)}\}$$
A vocabulary of intended intelligence consists of constant and predicate symbols:
$$V_{\iota_H(L)} = \{V_{\iota_H(L)}^C, V_{\iota_H(L)}^P\}$$
Accordingly, the first intended-intelligence vocabulary is
$$V_{\iota_{H_F}(L_1)} = \{V_{\iota_{H_F}(L_1)}^C, V_{\iota_{H_F}(L_1)}^P\}$$
and the second intended-intelligence vocabulary is
$$V_{\iota_{H_S}(L_2)} = \{V_{\iota_{H_S}(L_2)}^C, V_{\iota_{H_S}(L_2)}^P\}$$
The partial overlap between the ontological views indicates that their intended intelligences also overlap; hence each part of the vocabulary can itself be split into constant and predicate symbols:
$$\overline{V}_{\iota_{H_F}(L_1)} = \{\overline{V}_{\iota_{H_F}(L_1)}^C, \overline{V}_{\iota_{H_F}(L_1)}^P\}, \qquad \widetilde{V}_{\iota_{H_F}(L_1)} = \{\widetilde{V}_{\iota_{H_F}(L_1)}^C, \widetilde{V}_{\iota_{H_F}(L_1)}^P\}$$
$$\overline{V}_{\iota_{H_S}(L_2)} = \{\overline{V}_{\iota_{H_S}(L_2)}^C, \overline{V}_{\iota_{H_S}(L_2)}^P\}, \qquad \widetilde{V}_{\iota_{H_S}(L_2)} = \{\widetilde{V}_{\iota_{H_S}(L_2)}^C, \widetilde{V}_{\iota_{H_S}(L_2)}^P\}$$
so that
$$V_{\iota_{H_F}(L_1)} = \{\overline{V}_{\iota_{H_F}(L_1)}^C, \overline{V}_{\iota_{H_F}(L_1)}^P, \widetilde{V}_{\iota_{H_F}(L_1)}^C, \widetilde{V}_{\iota_{H_F}(L_1)}^P\}$$
$$V_{\iota_{H_S}(L_2)} = \{\overline{V}_{\iota_{H_S}(L_2)}^C, \overline{V}_{\iota_{H_S}(L_2)}^P, \widetilde{V}_{\iota_{H_S}(L_2)}^C, \widetilde{V}_{\iota_{H_S}(L_2)}^P\}$$
In the same way, the ontological-view vocabularies can be categorized into vocabulary that falls inside the intersection area ($\overline{V}_{\mu_F}$, $\overline{V}_{\pi_s}$) and vocabulary situated beyond the overlap area ($\widetilde{V}_{\mu_F}$, $\widetilde{V}_{\pi_s}$), yet still within the intended area of intelligence.
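The split of an intended-intelligence vocabulary into symbols inside and outside the intersection area can be sketched with plain sets; the symbol names below are illustrative.

```python
# Sketch of the vocabulary split described above: for a given intended
# intelligence, the part of its vocabulary shared with the other intended
# intelligence lies inside the intersection area; the rest lies outside
# it but still within the intended intelligence.

def split_vocabulary(v_first: set, v_second: set) -> tuple:
    """Return (inside-intersection, outside-intersection) for v_first."""
    inside = v_first & v_second    # shared symbols
    outside = v_first - v_second   # symbols private to v_first
    return inside, outside

v_l1 = {"Person", "Employee", "worksFor"}   # first intended-intelligence vocab
v_l2 = {"Person", "Student", "enrolledIn"}  # second intended-intelligence vocab

inside, outside = split_vocabulary(v_l1, v_l2)
# inside and outside partition v_l1: they are disjoint and rebuild v_l1
```

The same split applied symmetrically to the second vocabulary yields the four groups used in the equations above.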

7.3. Overlapped Intended Intelligence Concept

In the context of a static and centralized information system, the semantics and interpretation of the system are implicitly embedded within the software component. In the context of dynamic distributed/decentralized information system environments, it is not practicable to establish a universally accepted ontology before developing each individual system. Therefore, the semantics of each system within a particular business domain can be understood as an ontological view of the same conceptualization.
At the start of the $CIDE$ framework development procedure, we examine the properties of partially overlapping (shared) intended intelligence and the correlation between intended intelligence and the ontological view. While the intended intelligences are derived from an ontological view, they are compatible with the conceptualization model's intended models, and the intersection of many ontological views should involve overlap between the intended intelligences. Before delving into the exploration of any overlapping view model, it is imperative to clarify the properties that emerge from the assumption of overlapped intelligence, as depicted in Figure 6. Figure 7 illustrates that the designated intelligence vocabulary $V_{\iota_{H_F}(L_1)}$ consists of both constant and predicate symbols.
According to the definition provided in Section 7.2.7, the symbol $V_{\mu_F}$ denotes the vocabulary of the first ontological view. The following theorem can be deduced about the correlation between $V_{\mu_F}$ and $V_{\iota_{H_F}(L_1)}$, utilizing the underlying premise of intended overlapped intelligence.
[Theorem 7.3.1] Given a first ontological view $\mu_F = \langle L_1, H_F\rangle$ with a set of intended intelligences $\iota_{H_F}(L_1)$ of $L_1$ according to the first ontological commitment $H_F$, if language $L_1$ has vocabulary $V_{L_1}$, the ontological commitment $H_1$ maps the data (first form of intelligence) to vocabulary $V_{\iota_{H_F}(L_1)}$, where $V_{\iota_{H_F}(L_1)} \subseteq V_{L_1}$, and the view ontological commitment $H_F$ maps the data ($dt_1$) to vocabulary $V_{\mu_F}$, where $V_{\mu_F} \subseteq V_{L_1}$, then $V_{\iota_{H_F}(L_1)} \subseteq V_{\mu_F}$.
Proof.
Based on the premise of a shared assumption regarding intended intelligence, it can be inferred that intended intelligence is encompassed within the context of the first ontological view:
$$\iota_{H_F}(L_1) \subseteq \mu_F$$
The overlapped intended-intelligence vocabulary $V_{\iota_{H_F}(L_1)}$ is composed of constant symbols and predicate symbols, and the first ontological view's vocabulary $V_{\mu_F}$ likewise consists of constant and predicate symbols.
Definitions 7.2.3 and 7.2.5 lead us to the conclusion that for each constant $c$ of $V_{\iota_{H_F}(L_1)}^C$, $I_1(c) = \Im_{\mu_F}(c)$, but not vice versa, because the set of intended intelligences $\iota_{H_F}(L_1)$ of $L_1$ according to $\mu_F$ is consistent with $H_1$ according to Definition 7.2.6. In other words, $V_{\iota_{H_F}(L_1)}^C$ is a subset of $V_{\mu_F}^C$:
$$V_{\iota_{H_F}(L_1)}^C \subseteq V_{\mu_F}^C$$
In addition, there is a $dt_1$ such that, for any overlapped predicate symbol $p$ of $V_{\iota_{H_F}(L_1)}^P$, $I_1$ maps such a predicate into an admissible extension of $\Im(p)$, i.e., there exists a conceptual relation $q$ such that $I_1(q) = \Im_{\mu_F}(p) = q(dt_1)$, although this is not always true in the other direction. Therefore, $V_{\iota_{H_F}(L_1)}^P$ is contained within $V_{\mu_F}^P$:
$$V_{\iota_{H_F}(L_1)}^P \subseteq V_{\mu_F}^P$$
Because $V_{\iota_{H_F}(L_1)}^C \subseteq V_{\mu_F}^C$ and $V_{\mu_F}^C \subseteq V_{\mu_F}$, it follows that $V_{\iota_{H_F}(L_1)}^C$ is a subset of $V_{\mu_F}$:
$$V_{\iota_{H_F}(L_1)}^C \subseteq V_{\mu_F}$$
Because $V_{\iota_{H_F}(L_1)}^P \subseteq V_{\mu_F}^P$ and $V_{\mu_F}^P \subseteq V_{\mu_F}$, it follows that $V_{\iota_{H_F}(L_1)}^P$ is a subset of $V_{\mu_F}$:
$$V_{\iota_{H_F}(L_1)}^P \subseteq V_{\mu_F}$$
Therefore, it is reasonable to conclude that $V_{\iota_{H_F}(L_1)} = \{V_{\iota_{H_F}(L_1)}^C, V_{\iota_{H_F}(L_1)}^P\}$ is a subset of $V_{\mu_F}$:
$$V_{\iota_{H_F}(L_1)} \subseteq V_{\mu_F}$$
[Theorem 7.3.2] Given a second set of intended intelligences $\iota_{H_S}(L_2)$ of $L_2$ according to the second ontological commitment $H_S$, and a second ontological view $\pi_s = \langle L_2, H_S\rangle$, if language $L_2$ has vocabulary $V_{L_2}$, the ontological commitment $H_S$ maps the data $dt_2$ to vocabulary $V_{\iota_{H_S}(L_2)}$, where $V_{\iota_{H_S}(L_2)} \subseteq V_{L_2}$, and the view ontological commitment $H_S$ maps the data ($dt_2$) to vocabulary $V_{\pi_s}$, where $V_{\pi_s} \subseteq V_{L_2}$, then $V_{\iota_{H_S}(L_2)} \subseteq V_{\pi_s}$.
Proof.
Definitions 7.2.3 and 7.2.5 lead us to the conclusion that for each constant $c$ of $V_{\iota_{H_S}(L_2)}^C$, $I_2(c) = \Im_{\pi_s}(c)$, but not vice versa, because the set of intended intelligences $\iota_{H_S}(L_2)$ of $L_2$ according to $H_S$ is consistent with $H_2$ according to Definition 7.2.6. In other words, $V_{\iota_{H_S}(L_2)}^C$ is a subset of $V_{\pi_s}^C$:
$$V_{\iota_{H_S}(L_2)}^C \subseteq V_{\pi_s}^C$$
In addition, there is a $dt_2$ such that, for any overlapped predicate symbol $p$ of $V_{\iota_{H_S}(L_2)}^P$, $I_2$ maps such a predicate into an admissible extension of $\Im(p)$, i.e., there exists a conceptual relation $q$ such that $I_2(q) = \Im_{\pi_s}(p) = q(dt_2)$, although this is not always true in the other direction. Therefore, $V_{\iota_{H_S}(L_2)}^P$ is contained within $V_{\pi_s}^P$:
$$V_{\iota_{H_S}(L_2)}^P \subseteq V_{\pi_s}^P$$
Because $V_{\iota_{H_S}(L_2)}^C \subseteq V_{\pi_s}^C$ and $V_{\pi_s}^C \subseteq V_{\pi_s}$, it follows that $V_{\iota_{H_S}(L_2)}^C$ is a subset of $V_{\pi_s}$:
$$V_{\iota_{H_S}(L_2)}^C \subseteq V_{\pi_s}$$
Because $V_{\iota_{H_S}(L_2)}^P \subseteq V_{\pi_s}^P$ and $V_{\pi_s}^P \subseteq V_{\pi_s}$, it follows that $V_{\iota_{H_S}(L_2)}^P$ is a subset of $V_{\pi_s}$:
$$V_{\iota_{H_S}(L_2)}^P \subseteq V_{\pi_s}$$
Therefore, it is reasonable to conclude that $V_{\iota_{H_S}(L_2)} = \{V_{\iota_{H_S}(L_2)}^C, V_{\iota_{H_S}(L_2)}^P\}$ is a subset of $V_{\pi_s}$:
$$V_{\iota_{H_S}(L_2)} \subseteq V_{\pi_s}$$
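Theorems 7.3.1 and 7.3.2 can be checked on toy vocabularies: if the constant and predicate symbols of an intended intelligence are each contained in the view's constant and predicate symbols, the whole vocabulary is contained in the view's vocabulary. The sets below are illustrative, not drawn from the paper.

```python
# Toy check of the proof structure of Theorems 7.3.1/7.3.2: two subset
# steps (constants, then predicates) combine into containment of the
# whole vocabulary in the ontological view's vocabulary.

def vocab_contained(i_const: set, i_pred: set,
                    view_const: set, view_pred: set) -> bool:
    """Mirror the proof: check each subset step, then their union."""
    if not (i_const <= view_const and i_pred <= view_pred):
        return False
    # union of the two contained parts is contained in the union
    return (i_const | i_pred) <= (view_const | view_pred)

# V_iota(L1) drawn from V_mu_F (hypothetical symbol names)
assert vocab_contained({"c1"}, {"p1"}, {"c1", "c2"}, {"p1", "p2"})
assert not vocab_contained({"c9"}, {"p1"}, {"c1", "c2"}, {"p1", "p2"})
```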
In Theorems 7.3.1 and 7.3.2, the first and second intended-intelligence vocabularies were denoted $V_{\iota_{H_F}(L_1)}$ and $V_{\iota_{H_S}(L_2)}$, respectively. The terminology of the intended intelligences can now be categorized into two distinct groups, considering the extent of overlap observed among the ontological views, as classified below.
In the above theorems, we examined the correlation between the vocabulary employed by each ontological view and its intended intelligence. Before intersecting with other ontological views, the assumption is that they are complete and sound: completeness ensures that an ontology can generate all intended intelligences, whereas soundness guarantees that any model generated by the ontology is an intended intelligence. The additional constant and predicate symbols are associated with the vocabulary $V_{\mu_F}^e$ in the first ontological view and $V_{\pi_s}^e$ in the second. The constant symbols in the vocabulary $V_{\mu_F}^C$ of the first ontological view consist of the constant symbols in the intended intelligence $V_{\iota_{H_F}(L_1)}^C$, together with a set of additional constant symbols $V_{\mu_F}^{Ce}$ that do not belong to the intended intelligence. These symbols can be identified as:
$$V_{\mu_F}^C = V_{\iota_{H_F}(L_1)}^C \cup V_{\mu_F}^{Ce} = \{V_{\iota_{H_F}(L_1)}^C, V_{\mu_F}^{Ce}\}, \qquad V_{\iota_{H_F}(L_1)}^C \cap V_{\mu_F}^{Ce} = \emptyset$$
The predicate symbols of the first ontological view's vocabulary $V_{\mu_F}^P$ can now be redefined: this vocabulary consists of a set of extra predicate symbols $V_{\mu_F}^{Pe}$ belonging to the ontological view, together with the predicate symbols of the intended intelligence $V_{\iota_{H_F}(L_1)}^P$:
$$V_{\mu_F}^P = V_{\iota_{H_F}(L_1)}^P \cup V_{\mu_F}^{Pe} = \{V_{\iota_{H_F}(L_1)}^P, V_{\mu_F}^{Pe}\}, \qquad V_{\iota_{H_F}(L_1)}^P \cap V_{\mu_F}^{Pe} = \emptyset$$
In the same way, the second ontological view employs two sets of vocabulary, $V_{\pi_s}^C$ and $V_{\pi_s}^P$, which contain extra constant and predicate symbols, $V_{\pi_s}^{Ce}$ and $V_{\pi_s}^{Pe}$ respectively, in addition to the constant and predicate symbols of the intended intelligence, $V_{\iota_{H_S}(L_2)}^C$ and $V_{\iota_{H_S}(L_2)}^P$. The constants of the second ontological view's vocabulary are:
$$V_{\pi_s}^C = V_{\iota_{H_S}(L_2)}^C \cup V_{\pi_s}^{Ce} = \{V_{\iota_{H_S}(L_2)}^C, V_{\pi_s}^{Ce}\}, \qquad V_{\iota_{H_S}(L_2)}^C \cap V_{\pi_s}^{Ce} = \emptyset$$
whereas the predicates of the second ontological view's vocabulary are:
$$V_{\pi_s}^P = V_{\iota_{H_S}(L_2)}^P \cup V_{\pi_s}^{Pe} = \{V_{\iota_{H_S}(L_2)}^P, V_{\pi_s}^{Pe}\}, \qquad V_{\iota_{H_S}(L_2)}^P \cap V_{\pi_s}^{Pe} = \emptyset$$
Using the decompositions above, the vocabulary $V_{\mu_F}$ of the first ontological view can be represented as:
$$V_{\mu_F} = \{V_{\mu_F}^C, V_{\mu_F}^P\} = \{V_{\iota_{H_F}(L_1)}^C, V_{\mu_F}^{Ce}, V_{\iota_{H_F}(L_1)}^P, V_{\mu_F}^{Pe}\}$$
Similarly, the vocabulary of the second ontological view, $V_{\pi_s} = \{V_{\pi_s}^C, V_{\pi_s}^P\}$, can be expressed as:
$$V_{\pi_s} = \{V_{\pi_s}^C, V_{\pi_s}^P\} = \{V_{\iota_{H_S}(L_2)}^C, V_{\pi_s}^{Ce}, V_{\iota_{H_S}(L_2)}^P, V_{\pi_s}^{Pe}\}$$
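The decomposition above (view vocabulary = intended-intelligence vocabulary plus disjoint extra symbols) can be verified mechanically; the symbol names below are illustrative.

```python
# Sketch of the decomposition above, assuming vocabularies are plain sets
# of symbol names: a view's vocabulary is the disjoint union of its
# intended-intelligence vocabulary and a set of extra symbols V^e.

def decompose_view_vocab(v_view: set, v_intended: set) -> set:
    """Return the extra symbols: V^e = V_view minus V_intended."""
    assert v_intended <= v_view             # premise (Theorems 7.3.1/7.3.2)
    extra = v_view - v_intended
    assert v_intended | extra == v_view     # the union rebuilds V_view
    assert not (v_intended & extra)         # and the parts are disjoint
    return extra

v_mu_f = {"c1", "c2", "p1", "p2"}   # illustrative view vocabulary
v_iota = {"c1", "p1"}               # intended-intelligence vocabulary
extra = decompose_view_vocab(v_mu_f, v_iota)
```

The internal assertions mirror the two conditions stated in the equations above: the union reconstructs the view's vocabulary, and the intersection of the two parts is empty.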
Before overlapping these ontological views, we examined the interconnections among the vocabularies associated with each ontological view. This study centres on creating a framework for collaborative intelligence, which necessitates semantic integration using an ontological view.
As a result, it is imperative to examine the partial overlap of these ontological views with respect to their intended intelligences. It is crucial to note that, within the first intended intelligence $\iota_{H_F}(L_1)$, there exists additional vocabulary derived from the intersection of the ontological views, denoted $\widetilde{V}_{\iota_{H_F}(L_1)} = \{\widetilde{V}_{\iota_{H_F}(L_1)}^C, \widetilde{V}_{\iota_{H_F}(L_1)}^P\}$, and likewise within the second intended intelligence $\iota_{H_S}(L_2)$, denoted $\widetilde{V}_{\iota_{H_S}(L_2)} = \{\widetilde{V}_{\iota_{H_S}(L_2)}^C, \widetilde{V}_{\iota_{H_S}(L_2)}^P\}$. This extra vocabulary may extend beyond the overlapping area between the two intended intelligences. The vocabulary present in the overlapping area is denoted $\overline{V}_{\iota_{H_F}(L_1)} = \{\overline{V}_{\iota_{H_F}(L_1)}^C, \overline{V}_{\iota_{H_F}(L_1)}^P\}$ in the first intended intelligence and $\overline{V}_{\iota_{H_S}(L_2)} = \{\overline{V}_{\iota_{H_S}(L_2)}^C, \overline{V}_{\iota_{H_S}(L_2)}^P\}$ in the second. For elucidation, we establish the following definitions for these extra symbols.
[Definition 7.3.1] The extra constant symbols of the first intended intelligence vocabulary (derived from equations 5.10, 5.11, and 5.16), i.e. those located outside the intersection of the ontological views, are:
$V_{\imath_{HF}(L_1)}^{Ce} = V_{\imath_{HF}(L_1)}^{C} - \bar{V}_{\imath_{HF}(L_1)}^{C}$, so that $V_{\imath_{HF}(L_1)}^{C} = \{\bar{V}_{\imath_{HF}(L_1)}^{C}, V_{\imath_{HF}(L_1)}^{Ce}\}$
where
$\bar{V}_{\imath_{HF}(L_1)}^{C} \cap V_{\imath_{HF}(L_1)}^{Ce} = \emptyset$
[Definition 7.3.2] The extra predicate symbols of the first intended intelligence vocabulary (derived from equations 8, 9, and 14), i.e. those located outside the intersection of the ontological views, are:
$V_{\imath_{HF}(L_1)}^{Pe} = V_{\imath_{HF}(L_1)}^{P} - \bar{V}_{\imath_{HF}(L_1)}^{P}$, so that $V_{\imath_{HF}(L_1)}^{P} = \{\bar{V}_{\imath_{HF}(L_1)}^{P}, V_{\imath_{HF}(L_1)}^{Pe}\}$
where
$\bar{V}_{\imath_{HF}(L_1)}^{P} \cap V_{\imath_{HF}(L_1)}^{Pe} = \emptyset$
[Definition 7.3.3] The extra constant symbols of the second intended intelligence vocabulary, based on 8, 10, and 17, may be given as:
$V_{\imath_{HS}(L_2)}^{Ce} = V_{\imath_{HS}(L_2)}^{C} - \bar{V}_{\imath_{HS}(L_2)}^{C}$, so that $V_{\imath_{HS}(L_2)}^{C} = \{\bar{V}_{\imath_{HS}(L_2)}^{C}, V_{\imath_{HS}(L_2)}^{Ce}\}$
where
$\bar{V}_{\imath_{HS}(L_2)}^{C} \cap V_{\imath_{HS}(L_2)}^{Ce} = \emptyset$
The extra predicate symbols of the second intended intelligence vocabulary, based on 5.10, 5.12, and 5.19, are:
$V_{\imath_{HS}(L_2)}^{Pe} = V_{\imath_{HS}(L_2)}^{P} - \bar{V}_{\imath_{HS}(L_2)}^{P}$, so that $V_{\imath_{HS}(L_2)}^{P} = \{\bar{V}_{\imath_{HS}(L_2)}^{P}, V_{\imath_{HS}(L_2)}^{Pe}\}$
where
$\bar{V}_{\imath_{HS}(L_2)}^{P} \cap V_{\imath_{HS}(L_2)}^{Pe} = \emptyset$
Based on the partial overlap of the intended intelligences, two different categories of vocabulary exist, as stated in equations 4 and 5. Consequently, the vocabularies of the ontological views may be categorized into two classifications ($V_{\mu_F}$, $V_{\pi_s}$):
$V_{\mu_F} = \{V_{\mu_F}^{C}, V_{\mu_F}^{P}\} = \{\bar{V}_{\mu_F}^{C}, V_{\mu_F}^{Ce}, \bar{V}_{\mu_F}^{P}, V_{\mu_F}^{Pe}\}$
where
$\bar{V}_{\imath_{HF}(L_1)}^{C} \subseteq \bar{V}_{\mu_F}^{C}$, $V_{\imath_{HF}(L_1)}^{Ce} \subseteq V_{\mu_F}^{Ce}$ and $\bar{V}_{\imath_{HF}(L_1)}^{P} \subseteq \bar{V}_{\mu_F}^{P}$, $V_{\imath_{HF}(L_1)}^{Pe} \subseteq V_{\mu_F}^{Pe}$
The set $V_{\mu_F}$ can therefore be expressed as follows:
$V_{\mu_F} = \{\bar{V}_{\imath_{HF}(L_1)}^{C}, V_{\imath_{HF}(L_1)}^{Ce}, \bar{V}_{\imath_{HF}(L_1)}^{P}, V_{\imath_{HF}(L_1)}^{Pe}\}$
$V_{\pi_s} = \{V_{\pi_s}^{C}, V_{\pi_s}^{P}\} = \{\bar{V}_{\pi_s}^{C}, V_{\pi_s}^{Ce}, \bar{V}_{\pi_s}^{P}, V_{\pi_s}^{Pe}\}$
where
$\bar{V}_{\imath_{HS}(L_2)}^{C} \subseteq \bar{V}_{\pi_s}^{C}$, $V_{\imath_{HS}(L_2)}^{Ce} \subseteq V_{\pi_s}^{Ce}$ and $\bar{V}_{\imath_{HS}(L_2)}^{P} \subseteq \bar{V}_{\pi_s}^{P}$, $V_{\imath_{HS}(L_2)}^{Pe} \subseteq V_{\pi_s}^{Pe}$
The set $V_{\pi_s}$ can therefore be expressed as follows:
$V_{\pi_s} = \{\bar{V}_{\imath_{HS}(L_2)}^{C}, V_{\imath_{HS}(L_2)}^{Ce}, \bar{V}_{\imath_{HS}(L_2)}^{P}, V_{\imath_{HS}(L_2)}^{Pe}\}$
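As an informal illustration of this classification, the split of each view's vocabulary into a shared (overlap) part and an extra part can be sketched with ordinary set operations; the symbol names below are invented for illustration and are not vocabulary from the paper:

```python
# Hypothetical toy vocabularies for the two ontological views (constants and
# predicates pooled into one set per view for simplicity).
V_mu_F = {"Patient", "Ward", "admits", "treats"}   # first view's vocabulary
V_pi_s = {"Patient", "Invoice", "admits", "bills"}  # second view's vocabulary

def partition_by_overlap(V_own: set, V_other: set) -> tuple:
    """Split a view's vocabulary into the part inside the overlap
    (the 'barred' vocabulary) and the extra part outside it."""
    shared = V_own & V_other   # barred part
    extra = V_own - V_other    # extra ('e') part
    return shared, extra

shared_F, extra_F = partition_by_overlap(V_mu_F, V_pi_s)
shared_S, extra_S = partition_by_overlap(V_pi_s, V_mu_F)

# Each vocabulary is the disjoint union of its shared and extra parts,
# mirroring Definitions 7.3.1-7.3.3.
assert shared_F | extra_F == V_mu_F and shared_F & extra_F == set()
assert shared_S | extra_S == V_pi_s and shared_S & extra_S == set()
```

This is only a flat model: in the paper the constant and predicate symbols are partitioned separately, but the disjoint-union structure is the same.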
Based on formulas 5.8 and 5.9, the following theorem can be introduced concerning the relationship between the overlap vocabularies $\bar{V}_{\imath_{HF}(L_1)}$, $\bar{V}_{\imath_{HS}(L_2)}$ and the view vocabularies $V_{\mu_F}$, $V_{\pi_s}$.
[Theorem 7.3.3] Given two ontological views $\mu_F$ and $\pi_s$: the first ontological view, $\mu_F = \langle L_1, H_F\rangle$, is represented by $L_1$ with vocabulary $V_{\mu_F}$, and the set $\imath_{HF}(L_1)$ comprises the intended intelligences of $L_1$ according to $H_F$; similarly, the second ontological view, $\pi_s = \langle L_2, H_S\rangle$, is represented by $L_2$ with vocabulary $V_{\pi_s}$, and $\imath_{HS}(L_2)$ comprises the intended intelligences of $L_2$ according to $H_S$. If the language $L_1$ possesses vocabulary $V_{L_1}$, the view ontological commitment $H_F$ maps some of the data $D_{t_2}$ to $\bar{V}_{\imath_{HF}(L_1)}$, where $\bar{V}_{\imath_{HF}(L_1)} \subseteq V_{L_1}$, and maps the data $D_{t_1}$ to the vocabulary $V_{\mu_F}$, where $V_{\mu_F} \subseteq V_{L_1}$; and if the language $L_2$ possesses vocabulary $V_{L_2}$, the view ontological commitment $H_S$ maps some of the data $D_{t_1}$ to $\bar{V}_{\imath_{HS}(L_2)}$, where $\bar{V}_{\imath_{HS}(L_2)} \subseteq V_{L_2}$, and maps the data $D_{t_2}$ to the vocabulary $V_{\pi_s}$, where $V_{\pi_s} \subseteq V_{L_2}$; then the vocabulary shared between these intended intelligences satisfies $(\bar{V}_{\imath_{HF}(L_1)} \cup \bar{V}_{\imath_{HS}(L_2)}) \subset (V_{\mu_F} \cap V_{\pi_s})$.
Proof
Based on the findings gathered from 5.7, 5.8, 5.9, 5.49, 5.51, 5.52, and 5.54, it may be concluded that
$V_{\mu_F} \cap V_{\pi_s}$
$= \{V_{\mu_F}^{C}, V_{\mu_F}^{P}\} \cap \{V_{\pi_s}^{C}, V_{\pi_s}^{P}\}$
$= \{\bar{V}_{\imath_{HF}(L_1)}^{C}, V_{\imath_{HF}(L_1)}^{Ce}, \bar{V}_{\imath_{HF}(L_1)}^{P}, V_{\imath_{HF}(L_1)}^{Pe}\} \cap \{\bar{V}_{\imath_{HS}(L_2)}^{C}, V_{\imath_{HS}(L_2)}^{Ce}, \bar{V}_{\imath_{HS}(L_2)}^{P}, V_{\imath_{HS}(L_2)}^{Pe}\}$
$= \{\bar{V}_{\imath_{HF}(L_1)}^{C}, \bar{V}_{\imath_{HF}(L_1)}^{P}, \bar{V}_{\imath_{HS}(L_2)}^{C}, \bar{V}_{\imath_{HS}(L_2)}^{P}, Q\}$
Based on 5.42, 5.44, 5.46, 5.48 and
$\bar{V}_{\imath_{HF}(L_1)}^{C} \subseteq \bar{V}_{\imath_{HF}(L_1)}$, $\bar{V}_{\imath_{HF}(L_1)}^{P} \subseteq \bar{V}_{\imath_{HF}(L_1)}$, $\bar{V}_{\imath_{HS}(L_2)}^{C} \subseteq \bar{V}_{\imath_{HS}(L_2)}$, $\bar{V}_{\imath_{HS}(L_2)}^{P} \subseteq \bar{V}_{\imath_{HS}(L_2)}$
therefore
$= \{\bar{V}_{\imath_{HF}(L_1)}, \bar{V}_{\imath_{HS}(L_2)}, Q\}$
where $Q$ represents the set of possible intersections between the extra symbols $V_{\imath_{HF}(L_1)}^{Ce}$, $V_{\imath_{HF}(L_1)}^{Pe}$, $V_{\imath_{HS}(L_2)}^{Ce}$, $V_{\imath_{HS}(L_2)}^{Pe}$.
Then
$\bar{V}_{\imath_{HF}(L_1)}$ and $\bar{V}_{\imath_{HS}(L_2)}$ are therefore a subset of the intersection of $V_{\mu_F}$ and $V_{\pi_s}$:
$\{\bar{V}_{\imath_{HF}(L_1)}, \bar{V}_{\imath_{HS}(L_2)}\} \subseteq V_{\mu_F} \cap V_{\pi_s}$
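A minimal set-based sanity check of the theorem's conclusion, using invented toy vocabularies (the symbol names are illustrative assumptions, not drawn from the paper): the overlap vocabulary of each intended intelligence must sit inside the common vocabulary of the two views.

```python
# Toy vocabularies of the two ontological views.
V_mu_F = {"Patient", "Ward", "admits", "treats"}
V_pi_s = {"Patient", "Invoice", "admits", "bills"}

# Overlap ("barred") vocabulary of each intended intelligence, assumed here
# to be drawn from the symbols the two views genuinely share.
V_bar_HF = {"Patient"}   # overlap vocabulary of the first intended intelligence
V_bar_HS = {"admits"}    # overlap vocabulary of the second intended intelligence

# Theorem 7.3.3: both overlap vocabularies lie inside the views' intersection.
assert V_bar_HF | V_bar_HS <= (V_mu_F & V_pi_s)
# With extra symbols present on both sides, the containment is proper.
assert V_bar_HF | V_bar_HS < (V_mu_F | V_pi_s)
```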
[Theorem 7.3.4] Let $L_1$ and $L_2$ be logical languages. Given a first set of intersected intended intelligences $\bar{\imath}_{H_1}(L_1)$ of $L_1$ according to $H_1$ and a first shared ontological view $\bar{\mu}_F = \langle L_1, \bar{H}_F\rangle$, where $\bar{H}_F \subseteq H_1$; and a second set of intersected intended intelligences $\bar{\imath}_{H_2}(L_2)$ of $L_2$ according to $H_2$ and a second shared ontological view $\bar{\pi}_s = \langle L_2, \bar{H}_S\rangle$, where $\bar{H}_S \subseteq H_2$: if the language $L_1$ has vocabulary $V_{L_1}$, the ontological commitment $H_1$ maps the data ($D_{t_1}$, the first form of intelligence) to the vocabulary $V_{\imath_{HF}(L_1)}$, where $\bar{V}_{\imath_{HF}(L_1)} \subseteq V_{\imath_{HF}(L_1)}$ and $V_{\imath_{HF}(L_1)} \subseteq V_{L_1}$; the view ontological commitment $H_F = \langle D_{t_1}, F\rangle$ maps the data ($D_{t_2}$) to the vocabulary $V_{\mu_F}$, where $V_{\mu_F} \subseteq V_{L_1}$; and the view ontological commitment $H_S = \langle D_{t_2}, S\rangle$ maps the data ($D_{t_1}$, the first form of intelligence) to the vocabulary $V_{\imath_{HS}(L_2)}$, where $\bar{V}_{\imath_{HS}(L_2)} \subseteq V_{\imath_{HS}(L_2)}$ and $V_{\imath_{HS}(L_2)} \subseteq V_{L_2}$; then $\bar{\imath}_{H_1}(L_1) \cap \bar{\imath}_{H_2}(L_2) \subseteq \bar{\mu}_F \cap \bar{\pi}_s$ and $\bar{\imath}_{H_1}(L_1) \cap \bar{\imath}_{H_2}(L_2) \subseteq \mu_F \cap \pi_s$.
Proof
According to Definitions 5.1.6.1.4, 5.1.6.1.5, 5.1.6.1.6, and 5.1.6.1.7, the intended intelligence consists of
$p_x^{\imath_{H_1}(L_1)} : V_{\imath_{H_2}(L_2)} \to G \cup T_1$, where $p_x^{T_1} = \{p(d_{t_1})^{p_x} \mid p_x \in p^{T_1}, d_{t_1} \in D_{t_1}\}$
$T_1$ is a subset of $T$ ($T_1 \subseteq T$) and represents the conceptual relations involved in the model's desired ontological commitment.
Because $D_{t_1}$ is derived from $S$, which comes from the same structured state $S_{t_1} = \langle G, S, T_1\rangle$, $T_1 \subseteq T$, equation (5.55) can be deduced as follows:
$p_x^{\imath_{H_1}(L_1)} : V_{\imath_{H_2}(L_2)} \to G \cup T_1$, where $p_x^{T_1} = \{p(d_{t_1})^{p_x} \mid p_x \in p^{T_1}, d_{t_1} \in S\}$
and
$p_x^{\imath_{H_2}(L_2)} : V_{\imath_{H_1}(L_1)} \to G \cup T_2$, where $p_x^{T_2} = \{p(d_{t_2})^{p_x} \mid p_x \in p^{T_2}, d_{t_2} \in D_{t_2}\}$
$T_2$ is a subset of $T$ ($T_2 \subseteq T$) and represents the conceptual relations involved in the model's desired ontological commitment.
Because $D_{t_2}$ is derived from $S$, which comes from the same structured state $S_{t_2} = \langle G, S, T_2\rangle$, $T_2 \subseteq T$, equation (5.55) can be deduced as follows:
$p_x^{\imath_{H_2}(L_2)} : V_{\imath_{H_1}(L_1)} \to G \cup T_2$, where $p_x^{T_2} = \{p(d_{t_2})^{p_x} \mid p_x \in p^{T_2}, d_{t_2} \in S\}$
As stated in Definition 5.1.6.1.7, there exists a conceptual relation $\rho$ for some predicate symbol $p$ such that $p_x(p) = \rho$ and $\rho(d_{t_1}) = \imath_{HF}(p)$. In other words, $\imath_{HF}(L_1)$ must be compatible with, and map to, the set of conceptual relations $T$. Then (5.56) can be derived further as:
$\imath_{HF}(L_1) : V_{\imath_{H_2}(L_2)} \to G \cup T$
Also by Definition 5.1.6.1.7, there exists a conceptual relation $\rho$ for some predicate symbol $p$ such that $p_x(p) = \rho$ and $\rho(d_{t_2}) = \imath_{HS}(p)$. In other words, $\imath_{HS}(L_2)$ must be compatible with, and map to, the set of conceptual relations $T$. Then (5.56) can be derived further as:
$\imath_{HS}(L_2) : V_{\imath_{H_1}(L_1)} \to G \cup T$
In accordance with Definition 5.1.6.1.7 and equation 5.57, we have the following for the first ontological view:
$p_x^{F} : V_{\imath_{HS}} \to G \cup T^{\imath_{HF}, p_x}$, where $T^{\imath_{HF}, p_x} \subseteq T$
We can further deduce (5.58) as:
$p_x^{F} : V_{\imath_{HS}} \to G \cup T$
For the second ontological view, similarly, we have:
$p_x^{S} : V_{\imath_{HF}} \to G \cup T^{\imath_{HS}, p_x}$, where $T^{\imath_{HS}, p_x} \subseteq T$
$p_x^{S} : V_{\imath_{HF}} \to G \cup T$
Since Theorem 7.3.3 has stated $\{\bar{V}_{\imath_{HF}(L_1)}, \bar{V}_{\imath_{HS}(L_2)}\} \subseteq V_{\mu_F} \cap V_{\pi_s}$, then
$\bar{\imath}_H \subseteq F \cap S$
$\bar{H} \subseteq H_F \cap H_S$
and then
$\bar{\imath}_H(L_1) \cap \bar{\imath}_H(L_2) \subseteq \bar{\mu}_F \cap \bar{\pi}_s$
Because $\bar{\imath}_H(L_1) \subseteq \imath_H(L_1)$, $\bar{\imath}_H(L_2) \subseteq \imath_H(L_2)$, $\bar{\mu}_F \subseteq \mu_F$ and $\bar{\pi}_s \subseteq \pi_s$,
then
$\bar{\imath}_H(L_1) \cap \bar{\imath}_H(L_2) \subseteq \mu_F \cap \pi_s$
[Proposition 7.3.1] Given data $D_{t_1} = \langle G, S_{t_1}, T_1\rangle$ and a first ontological view $\mu_F = \langle L_1, H_F\rangle$ that commits to the data $D_{t_1}$ by $H_F = \langle D_{t_1}, F\rangle$, and data $D_{t_2} = \langle G, S_{t_2}, T_2\rangle$ and a second ontological view $\pi_s = \langle L_2, H_S\rangle$ that commits to the data $D_{t_2}$ by $H_S = \langle D_{t_2}, S\rangle$: if the language $L_1$ has vocabulary $V_{L_1}$ and the ontological commitment $H_F = \langle D_{t_1}, F\rangle$ maps the data to the vocabulary $V_F$, where $V_F \subseteq V_{L_1}$, and the language $L_2$ has vocabulary $V_{L_2}$ and the ontological commitment $H_S = \langle D_{t_2}, S\rangle$ maps the data to the vocabulary $V_S$, where $V_S \subseteq V_{L_2}$, then the first ontological view $\mu_F$ is a subset of the second ontological view $\pi_s$, or vice versa, if and only if $\langle F, V_F\rangle$ is a subset of $\langle S, V_S\rangle$, or vice versa:
$(\langle F, V_F\rangle \subseteq \langle S, V_S\rangle) \wedge (D_{t_1} \subseteq D_{t_2}) \to (\mu_F \subseteq \pi_s)$, or
$(\langle S, V_S\rangle \subseteq \langle F, V_F\rangle) \wedge (D_{t_2} \subseteq D_{t_1}) \to (\pi_s \subseteq \mu_F)$
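Proposition 7.3.1 can be pictured with a simplified model in which a view is a pair (conceptualization relations, vocabulary) and the sub-view relation is componentwise set inclusion; the relation and symbol names below are invented for illustration:

```python
def view_subset(view_a, view_b):
    """Simplified model of Proposition 7.3.1: a view is the pair
    (conceptualization_relations, vocabulary); view_a is a sub-view of
    view_b iff both components are subsets of view_b's components."""
    (F_a, V_a), (F_b, V_b) = view_a, view_b
    return F_a <= F_b and V_a <= V_b

# Illustrative assumption: the second view strictly extends the first.
mu_F = ({"rel_admits"}, {"Patient", "Ward"})
pi_s = ({"rel_admits", "rel_bills"}, {"Patient", "Ward", "Invoice"})

assert view_subset(mu_F, pi_s)       # mu_F is a sub-view of pi_s
assert not view_subset(pi_s, mu_F)   # but not vice versa
```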
[Theorem 7.3.5] Given data $D_{t_1}$ where $D_{t_1} = \langle G, S_{t_1}, T_1\rangle$, $\mu_F = \langle L_1, H_F\rangle$ is the first ontological view, represented by the language $L_1$ with a vocabulary $V_{\mu_F}$, committing to $D_{t_1}$ by $H_F$; and given data $D_{t_2}$ where $D_{t_2} = \langle G, S_{t_2}, T_2\rangle$, $\pi_s = \langle L_2, H_S\rangle$ is the second ontological view, represented by the language $L_2$ with a vocabulary $V_{\pi_s}$, committing to $D_{t_2}$ by $H_S$. If $\mu_F$ and $\pi_s$ both approximate a common intended intelligence $\imath_H(L)$, where $\imath_H(L) \subseteq \imath_H(L_1) \cap \imath_H(L_2)$, $\bar{\mu}_F$ is a subset of $\mu_F$ with $\bar{\mu}_F = \mu_F \cap \pi_s$, and $\bar{\pi}_s$ is a subset of $\pi_s$ with $\bar{\pi}_s = \mu_F \cap \pi_s$, then $\imath_H(L)$ is a subset of the intersection of $\mu_F$ and $\pi_s$.
Proof
Theorem 7.3.4 tells us that
$\bar{\imath}_{H_1}(L_1) \cap \bar{\imath}_{H_2}(L_2) \subseteq \bar{\mu}_F \cap \bar{\pi}_s$, where
a common intended intelligence $\imath_H(L) \subseteq \bar{\imath}_{H_1}(L_1) \cap \bar{\imath}_{H_2}(L_2)$.
That is to say, $\imath_H(L)$ is some common intended intelligence of $\bar{\mu}_F$ and $\bar{\pi}_s$, where $\imath_H(L) \subseteq \imath_H(L_1)$ and $\imath_H(L) \subseteq \imath_H(L_2)$.
We can say
$\imath_H(L) \subseteq \bar{\mu}_F \cap \bar{\pi}_s$
Because $\bar{\mu}_F = \mu_F \cap \pi_s$ and $\bar{\pi}_s = \mu_F \cap \pi_s$, this also stands if we replace $\bar{\mu}_F \cap \bar{\pi}_s$ with $\mu_F \cap \pi_s$:
$\imath_H(L) \subseteq \mu_F \cap \pi_s$
Let $\imath_H(L_1)$ denote the intended intelligence of $\mu_F$, and $\imath_H(L_2)$ the intended intelligence of $\pi_s$.
Since $\mu_F$ and $\pi_s$ both approximate a common intended intelligence $\imath_H(L)$, we have:
$\imath_H(L_1) = \imath_H(L_2) = \imath_H(L)$
According to Definition 5.1.6.1.9, we have the following:
$(\mu_F \,\Xi\, \pi_s)$
That is, $\mu_F$ partially intersects $\pi_s$.
[Proposition 7.3.2] Given data $D_{t_1}$, the intended intelligence $\imath_H(L_1)$, and a first ontological view $\mu_F$ with vocabulary $V_{\mu_F}$, where $V_{\mu_F} \subseteq V_{L_1}$; and data $D_{t_2}$, the intended intelligence $\imath_H(L_2)$, and a second ontological view $\pi_s$ with vocabulary $V_{\pi_s}$, where $V_{\pi_s} \subseteq V_{L_2}$: if $\mu_F$ and $\pi_s$ partially overlap, then there exists an intersection function $\Sigma$ such that $\Sigma$ determines the overlap between $V_{\mu_F}$ and $V_{\pi_s}$:
$(V_{\mu_F} \,\Xi\, V_{\pi_s}) \to \exists\, \Sigma\, (\Sigma : (V_{\mu_F}, V_{\pi_s}, v_1, v_2) \leftrightarrow (v_1 \in V_{\mu_F} \wedge v_2 \in V_{\pi_s}))$
[Proposition 7.3.3] Given data $D_{t_1}$ where $D_{t_1} = \langle G, S_{t_1}, T_1\rangle$: $\mu_F = \langle L_1, H_F\rangle$ is the first ontological view, the language $L_1$ has a vocabulary $V_{L_1}$, and $\mu_F$ commits to the data $D_{t_1}$ by approximating the intended intelligence $\imath_H(L_1)$ through the ontological commitment $H_F = \langle D_{t_1}, F\rangle$, which maps the data $D_{t_1}$ to the vocabulary $V_{\mu_F}$, where $V_{\mu_F} \subseteq V_{L_1}$. Given data $D_{t_2}$ where $D_{t_2} = \langle G, S_{t_2}, T_2\rangle$, a second ontological view $\pi_s = \langle L_2, H_S\rangle$ commits to the data $D_{t_2}$ by approximating the intended intelligence $\imath_H(L_2)$ through the ontological commitment $H_S = \langle D_{t_2}, S\rangle$, which maps the data $D_{t_2}$ to the vocabulary $V_{\pi_s}$, where $V_{\pi_s} \subseteq V_{L_2}$. Then there exists an intersection function $\Sigma$ that determines the overlap between vocabulary in the first ontological view $V_{\mu_F}$ and the second ontological view $V_{\pi_s}$:
$(V_{\mu_F} \,\Xi\, V_{\pi_s}) \to \exists\, \Sigma\, (\Sigma : (V_{\mu_F}, V_{\pi_s}, v_1, v_2) \leftrightarrow (v_1 \in V_{\mu_F} \wedge v_2 \in V_{\pi_s}))$
Proof
According to Definition 5.1.6.1.9,
$(\imath_{HF}(L_1) \cap \imath_{HS}(L_2) \neq \emptyset) \wedge ((\imath_{HF}(L_1) \cap \imath_{HS}(L_2)) = \imath_H(L))$
Therefore, we have
$(\mu_F \,\Xi\, \pi_s)$
Using Proposition 7.3.2, we can derive
$\Sigma : (V_{\mu_F}, V_{\pi_s}, v_1, v_2) \leftrightarrow (v_1 \in V_{\mu_F} \wedge v_2 \in V_{\pi_s})$
It can therefore be stated that there is an intersection function $\Sigma$ that ascertains the extent of overlap between the first ontological view $\mu_F$ and the second ontological view $\pi_s$.
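One way to picture the intersection function $\Sigma$ is as a procedure that pairs up the symbols the two vocabularies have in common. The sketch below is a deliberately simplified model (the vocabularies are invented for illustration), treating $\Sigma$ as returning the matching pairs $(v_1, v_2)$:

```python
def sigma(V_mu: set, V_pi: set) -> set:
    """Simplified intersection function Sigma: return the pairs (v1, v2)
    with v1 in V_mu and v2 in V_pi that denote the same shared symbol."""
    return {(v, v) for v in V_mu & V_pi}

# Invented vocabularies for the two views.
V_mu = {"Patient", "Ward"}
V_pi = {"Patient", "Invoice"}

assert sigma(V_mu, V_pi) == {("Patient", "Patient")}
```

In practice the correspondence need not be the identity: matching symbols across views is a semantic-alignment task, and this toy equality test only stands in for it.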
Until now, the overlap of the ontological views $\mu_F$ and $\pi_s$ has been substantiated. The subsequent subsection focuses on our area of interest, namely the symmetric difference ($\mu_F \,\triangle\, \pi_s$) that lies beyond the area of overlapping ontological views.

7.4. Advancing Beyond the Overlapped Intended Intelligence Concept

The definitions and theorems in the preceding subsection indicate a partial overlap between the first and second ontological views. This partial overlap is essential to the establishment of collaborative intelligence and a corresponding framework. In contrast, a separate area lies outside the intersection of these ontological views, as evidenced by the theorems presented below.
[Theorem 7.4.1] Given data $D_{t_1}$ where $D_{t_1} = \langle G, S_{t_1}, T_1\rangle$, $\mu_F = \langle L_1, H_F\rangle$ is the first ontological view, represented by the language $L_1$ with a vocabulary $V_{\mu_F}$, committing to $D_{t_1}$ by $H_F$; and given data $D_{t_2}$ where $D_{t_2} = \langle G, S_{t_2}, T_2\rangle$, $\pi_s = \langle L_2, H_S\rangle$ is the second ontological view, represented by the language $L_2$ with a vocabulary $V_{\pi_s}$, committing to $D_{t_2}$ by $H_S$. If $\mu_F$ and $\pi_s$ both approximate a common intended intelligence $\imath_H(L)$, and $\mu_F \neq \pi_s$, $\hat{\mu}_F$ is a subset of $\mu_F$ with $\hat{\mu}_F = \mu_F - \pi_s$, and $\hat{\pi}_s$ is a subset of $\pi_s$ with $\hat{\pi}_s = \pi_s - \mu_F$, then $\hat{\mu}_F$ does not intersect $\hat{\pi}_s$.
Proof
According to 5.66, we have $\imath_H(L) \subseteq \mu_F \cap \pi_s$.
Because $\hat{\mu}_F = \mu_F - \pi_s$ and $\hat{\pi}_s = \pi_s - \mu_F$, then
$\hat{\mu}_F \cap \hat{\pi}_s$
$= (\mu_F - \pi_s) \cap (\pi_s - \mu_F)$
$= \emptyset$
Let $\imath_{HF_f}(L_1)$ denote some intended intelligence of $\hat{\mu}_F$, and $\imath_{HS_s}(L_2)$ some intended intelligence of $\hat{\pi}_s$. Then
$\imath_{HF_f}(L_1) \subseteq \hat{\mu}_F$
$\imath_{HS_s}(L_2) \subseteq \hat{\pi}_s$
Then
$(\imath_{HF_f}(L_1) \cap \imath_{HS_s}(L_2)) \subseteq \hat{\mu}_F \cap \hat{\pi}_s$
As (60) shows that the intersection between $\hat{\mu}_F$ and $\hat{\pi}_s$ is empty,
$(\imath_{HF_f}(L_1) \cap \imath_{HS_s}(L_2)) = \emptyset$.
According to equation 10, we have
$\hat{\mu}_F \cap \hat{\pi}_s = \emptyset$,
which means that $\hat{\mu}_F$ does not intersect $\hat{\pi}_s$.
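The disjointness argued in Theorem 7.4.1 is a direct consequence of set difference, which a toy model makes explicit (the vocabularies below are invented for illustration):

```python
# Toy vocabularies standing in for the two ontological views.
V_mu_F = {"Patient", "Ward", "admits"}
V_pi_s = {"Patient", "Invoice", "bills"}

mu_hat = V_mu_F - V_pi_s   # part of the first view outside the overlap
pi_hat = V_pi_s - V_mu_F   # part of the second view outside the overlap

# Theorem 7.4.1: the two non-overlapping parts never intersect.
assert mu_hat & pi_hat == set()
# Any intelligence drawn from one part therefore shares nothing with the other.
assert all(x not in pi_hat for x in mu_hat)
```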
[Proposition 7.4.1] Given data $D_{t_1}$ where $D_{t_1} = \langle G, S_1, T_1\rangle$, $\mu_F = \langle L_1, H_F\rangle$ is the first ontological view, represented by the language $L_1$ with a vocabulary $V_{\mu_F}$, committing to $D_{t_1}$ by $H_F$; and $D_{t_2} = \langle G, S_2, T_2\rangle$, $\pi_s = \langle L_2, H_S\rangle$ is the second ontological view, represented by the language $L_2$ with a vocabulary $V_{\pi_s}$, committing to $D_{t_2}$ by $H_S$. If $\imath_H(L)$ is an approximate common intended intelligence between $\mu_F$ and $\pi_s$, $\hat{\imath}_{HF}(L_1)$ is a subset of $\imath_{HF}(L_1)$ with $\hat{\imath}_{HF}(L_1) = \imath_{HF}(L_1) - \bar{\imath}_{HF}(L_1)$, and $\hat{\imath}_{HS}(L_2)$ is a subset of $\imath_{HS}(L_2)$ with $\hat{\imath}_{HS}(L_2) = \imath_{HS}(L_2) - \bar{\imath}_{HS}(L_2)$, where $\hat{\imath}_{HF}(L_1)$ is represented by the subset vocabulary of the language $V_{L_1}$ denoted by $V_{\hat{\imath}_{HF}(L_1)}$, committing to $D_{t_1}$ by $H_F$, and $\hat{\imath}_{HS}(L_2)$ is represented by the subset vocabulary of the language $V_{L_2}$ denoted by $V_{\hat{\imath}_{HS}(L_2)}$, committing to $D_{t_2}$ by $H_S$, then $V_{\hat{\imath}_{HF}(L_1)}$ does not overlap with $V_{\hat{\imath}_{HS}(L_2)}$: $V_{\hat{\imath}_{HF}(L_1)} \cap V_{\hat{\imath}_{HS}(L_2)} = \emptyset$.
[Theorem 7.4.2] Given $D_{t_1}$, where $D_{t_1} = \langle G, S_1, T_1\rangle$, $\mu_F = \langle L_1, H_F\rangle$ is the first ontological view, represented by the language $L_1$ with a vocabulary $V_{\mu_F}$, committing to $D_{t_1}$ by $H_F$; and $D_{t_2} = \langle G, S_2, T_2\rangle$, $\pi_s = \langle L_2, H_S\rangle$ is the second ontological view, represented by the language $L_2$ with a vocabulary $V_{\pi_s}$, committing to $D_{t_2}$ by $H_S$; and the two views intersect and approximate a common intended intelligence $\imath_H(L)$. If $V \subseteq V_{\mu_F} \cap V_{\pi_s}$ and $V \not\subseteq V_{\mu_F} \,\triangle\, V_{\pi_s}$, $\hat{V}_{\mu_F}$ is a subset of $V_{\mu_F}$ with $\hat{V}_{\mu_F} = V_{\mu_F} - V_{\pi_s}$, and $\hat{V}_{\pi_s}$ is a subset of $V_{\pi_s}$ with $\hat{V}_{\pi_s} = V_{\pi_s} - V_{\mu_F}$, then, because $V$ is vocabulary such that $V \subseteq V_{\mu_F}$ and $V \subseteq V_{\pi_s}$, it can be said that $V$ is related to both $\hat{V}_{\mu_F}$ and $\hat{V}_{\pi_s}$ through its membership in both $V_{\mu_F}$ and $V_{\pi_s}$.
Given:
$V_{\mu_F} = \{\hat{V}_{\mu_F}, V\}$
$V_{\pi_s} = \{\hat{V}_{\pi_s}, V\}$
Assumption:
There is a relation between the elements $V$ and $\hat{V}_{\mu_F}$ within $V_{\mu_F}$.
There is a relation between the elements $V$ and $\hat{V}_{\pi_s}$ within $V_{\pi_s}$.
Proof
$V_{\mu_F} \cap V_{\pi_s} = \{V\}$
Based on the provided information, there exist relationships between the vocabulary $V$ and $\hat{V}_{\mu_F}$ within $V_{\mu_F}$, as well as between $V$ and $\hat{V}_{\pi_s}$ within $V_{\pi_s}$.
Since $V$ is common vocabulary in both $V_{\mu_F}$ and $V_{\pi_s}$, and since there are relationships between $V$ and both $\hat{V}_{\mu_F}$ and $\hat{V}_{\pi_s}$, we can conclude that there are relationships between $\hat{V}_{\mu_F}$ and $\hat{V}_{\pi_s}$.
Therefore, based on the information provided, it is valid to assert that $V$ is related to both $\hat{V}_{\mu_F}$ and $\hat{V}_{\pi_s}$.
[Theorem 7.4.3] Given a partial intersection of two ontological views ($\mu_F$, $\pi_s$): $A_{\mu_F}$ is the set of axioms of the first ontological view, $A_{\pi_s}$ is the set of axioms of the second ontological view, $V_{A_{\mu_F}}$ is the vocabulary employed in constructing the sentences within the axioms $A_{\mu_F}$, and $V_{A_{\pi_s}}$ is the vocabulary employed in constructing the sentences within the axioms $A_{\pi_s}$. Suppose $V$ is a shared vocabulary between these ontological views, $V$ is a subset of $V_{A_{\mu_F}}$, where $V_{A_{\mu_F}}$ is a proper subset of $V_{\imath_{HF}(L_1)}$ and $V_{\imath_{HF}(L_1)}$ is a proper subset of $V_{\mu_F}$; additionally, $V$ is a subset of $V_{A_{\pi_s}}$, where $V_{A_{\pi_s}}$ is a proper subset of $V_{\imath_{HS}(L_2)}$ and $V_{\imath_{HS}(L_2)}$ is a proper subset of $V_{\pi_s}$. Then it can be inferred that $V$ is a subset of the intersection $V_{A_{\mu_F}} \cap V_{A_{\pi_s}}$.
Proof
Since $V_{\imath_{HF}(L_1)} \subset V_{\mu_F}$ and $V_{\imath_{HS}(L_2)} \subset V_{\pi_s}$, based on Theorems 5.1 and 5.2,
we can say:
$V \subseteq V_{\imath_{HF}(L_1)} \cap V_{\imath_{HS}(L_2)}$
or $V \subseteq V_{\mu_F} \cap V_{\pi_s}$.
There are axioms derived from the ontological views ($\mu_F$, $\pi_s$) that encompass the vocabulary $V$.
That is, $V \subseteq V_{A_{\mu_F}}$ and $V \subseteq V_{A_{\pi_s}}$.
In other words, it can be concluded that the set $V$ is a subset of the intersection of $V_{A_{\mu_F}}$ and $V_{A_{\pi_s}}$:
$V \subseteq V_{A_{\mu_F}} \cap V_{A_{\pi_s}}$.
[Proposition 7.4.2] Considering the partial intersection of two ontological views ($\mu_F$, $\pi_s$), let $V_{A_{\mu_F}}$ and $V_{A_{\pi_s}}$ be the vocabularies of the two sets of axioms belonging to $\mu_F$ and $\pi_s$, respectively, and let $v$ be a shared vocabulary: $v$ is a subset of $V_{A_{\mu_F}}$, $V_{A_{\mu_F}}$ is a subset of $V_{\imath_{HF}(L_1)}$, and $V_{\imath_{HF}(L_1)}$ is a proper subset of $V_{\mu_F}$; additionally, $v$ is a subset of $V_{A_{\pi_s}}$, $V_{A_{\pi_s}}$ is a subset of $V_{\imath_{HS}(L_2)}$, and $V_{\imath_{HS}(L_2)}$ is a proper subset of $V_{\pi_s}$. It can be inferred that $v$ is a subset of the intersection $V_{A_{\mu_F}} \cap V_{A_{\pi_s}}$. Consider the vocabulary $v_1$, which is a subset of $V_{A_{\mu_F}}$ and also a subset of $V_{A_{\mu_F}} - V_{A_{\pi_s}}$ ($v_1 \subseteq V_{A_{\mu_F}} - V_{A_{\pi_s}}$); note that $v_1$ relates to the vocabulary $v$ ($v \mathrel{R} v_1$). Similarly, consider the vocabulary $v_2$, which is a subset of $V_{\pi_s}$ and also a subset of $V_{A_{\pi_s}} - V_{A_{\mu_F}}$ ($v_2 \subseteq V_{A_{\pi_s}} - V_{A_{\mu_F}}$); note that $v_2$ relates to the vocabulary $v$ ($v \mathrel{R} v_2$). A new intelligence ($v_3$) can then be extracted by leveraging the association between $v_1$ and $v_2$ via $v$. Hence, in the event of a partial overlap between the sets $V_{A_{\mu_F}}$ and $V_{A_{\pi_s}}$, an extraction function $\Sigma_1$ can be established:
$(V_{A_{\mu_F}} \,\Xi\, V_{A_{\pi_s}}) \wedge [v_1 \subseteq V_{A_{\mu_F}} \wedge v_2 \subseteq V_{A_{\pi_s}}] \wedge v \subseteq (V_{A_{\mu_F}} \cap V_{A_{\pi_s}}) \to \exists\, \Sigma_1\, (\Sigma_1 : (v_1, v_2) \to v_3)$
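The extraction function $\Sigma_1$ behaves like a relational join on the shared vocabulary $v$: vocabulary $v_1$ known only to the first view and $v_2$ known only to the second view become associated through $v$, yielding the new intelligence $v_3$. The sketch below is a minimal model of that idea; the relation pairs and symbol names are invented for illustration:

```python
# Hypothetical relations drawn from each view's axioms, written as
# (subject, object) pairs; "Patient" plays the role of the shared vocabulary v.
R_mu = {("Ward", "Patient")}       # v1 R v  within the first view's axioms
R_pi = {("Patient", "Invoice")}    # v R v2  within the second view's axioms

def sigma1(R1: set, R2: set, shared: set) -> set:
    """Extraction function Sigma_1 modeled as a join: associate v1 and v2
    whenever they are linked through a symbol in the shared vocabulary."""
    return {(a, c)
            for (a, b) in R1
            for (b2, c) in R2
            if b == b2 and b in shared}

# The new cross-view intelligence v3: Ward is linked to Invoice via Patient.
new_intel = sigma1(R_mu, R_pi, {"Patient"})
assert new_intel == {("Ward", "Invoice")}
```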

8. Conclusion

In this article, we have provided a comprehensive examination of intelligence and its diverse manifestations, with an emphasis on conceptualization and ontological views. Furthermore, we have explored the concept of collaborative intelligence in a decentralized environment (CIDE), providing insights into the fundamental notions, theorems, and propositions that aid our understanding of how collaborative intelligence emerges in such an environment.
Our research is distinguished by the development of CIDE, guided predominantly by our original definition of "descriptive intelligence." In contrast to traditional methodologies that depend on the collective behaviour of individuals, our framework places significant importance on the pivotal role that ontological view-based semantic integration plays in the inception of CIDE.
Adopting this methodology has significantly broadened our comprehension of collaborative intelligence and has offered a novel outlook on its manifestation within intricate, decentralized environments. As a result, new opportunities arise for investigation and practical implementation in the domains of intelligence and collaboration.

Figure 1. Classification of the perception levels.
Figure 2. An outline of the nature of intelligence.
Figure 7. The emergence of collaborative intelligence from the partial overlap of intended intelligences.