3. Methodology
Strict rules are also referred to as exact rules, and soft rules as defeasible rules. Particular application fields of Artificial Moral Ethics include healthcare and autonomous vehicles.
In both settings, the knowledge base combines strict rules with defeasible ones, and the resulting programs can become contradictory: using the defeasible rules it is possible to derive, at the same time, a conclusion and its opposite. Given that defeasible rules are used in both cases, there is clearly a conflict that needs to be resolved. The same happens for autonomous cars, where defeasible rules make it possible to derive two mutually exclusive recommendations simultaneously. An illustrative sketch of this kind of conflict is given below.
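Because the original rule listings are domain-specific, the following is only a minimal, hypothetical sketch (standard Prolog; the predicate names dnr_order, recommend, futile_treatment and the patient p1 are invented for illustration) of how one strict rule and two defeasible rules can support contradictory conclusions:

:- dynamic dnr_order/1, futile_treatment/1.

% Strict (exact) rule: a do-not-resuscitate order strictly forbids resuscitation.
forbidden(resuscitate(P)) :- dnr_order(P).

% Defeasible rules: each holds only in the absence of contrary evidence.
recommend(icu_admission(P))   :- critical(P), \+ futile_treatment(P).
recommend(palliative_care(P)) :- critical(P), poor_prognosis(P).

% Hypothetical facts about a patient.
critical(p1).
poor_prognosis(p1).

% ?- recommend(X).
% yields both icu_admission(p1) and palliative_care(p1): the two defeasible
% conclusions conflict and must be arbitrated.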
Several questions remain open regarding information and its quality; the same applies to evaluating decisions and assessing whether a decision is morally acceptable.
Information: how should information be selected, how can its quality be evaluated, and how can the problems posed by incomplete, ambiguous, contradictory or vague information be overcome?
Knowledge: is the available knowledge complete?
Decision: is the decision morally acceptable?
These questions all relate to how costs and risks can be reduced in systems that rely on default reasoning.
There are many attributes or dimensions related to quality, in particular inclusion, objectivity, accessibility, timeliness, confidence, precision and validity. The quality of information (QoI) is related more to the user's point of view and emotions than to technology. The main challenge is how to meet moral and legal requirements, as well as data quality, information security, access control and privacy. On the other hand, ethical attributes include autonomy, beneficence, non-maleficence, dignity, justice and truthfulness. These attributes must be quantified, and their importance cannot be underestimated in the decision-making process [21].
Let A and B be two sets of actions. The action set A is considered morally preferable to the action set B if A ≺ B, where ≺ represents the moral preference relation. This implies that for each predicate p ∈ A there exists a predicate q ∈ B such that p ≺ q, and the intersection A ∩ B is not empty [22]. By incorporating defeasible rules, AI systems can make decisions that are more ethically aligned, better capturing the complexities and subtleties of real-world scenarios.
Conversely, our approach posits that CLP offers advantages that can address the limitations of other methods, such as those relying on black-box reasoning or case-based reasoning. A key feature of using CLP to model morality lies in its reliance on deontological principles as a trustworthy knowledge source and its focus on expert evaluation. Modeling morality through principles, rules, and exceptions or abducibles is inherently understandable for experts, allowing for traceability through proof trees, with a processing method that is transparent, predictable, and adaptable.
The primary aim of using CLP is to support decision-making architectures that consider the moral context, rather than to replicate moral reasoning itself. In Healthcare, CLP provides the ability to justify moral decisions and address uncertainties in real-time for clinical staff, while clearly presenting the reasoning process underlying the recommendations made by the system.
Explainable AI (XAI), in turn, emphasizes interpretability, user comprehension, and the role of explanations as interfaces that improve AI transparency. Researchers in XAI are deeply focused on bridging the interpretation gap, especially in applications where transparency is paramount for end users, such as healthcare and finance. XAI techniques are designed to provide insights into how models make decisions, which can be complex with deep learning and other opaque models. Tools like Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) are often used to attribute decisions to specific input features, making models more understandable by highlighting what influences their decisions [23]. Broader XAI research focuses on creating models that balance interpretability with performance. There is also a push for methods that inherently integrate explainability, such as rule-based approaches, which allow stakeholders to understand decisions at a high level, supporting more transparent AI systems that align with principles of Responsible AI.
The use of CLP to model ethical decisions has several advantages. Firstly, its clarity and simplicity in defining rules and facts make the modelling of moral dilemmas transparent. Rules can be explicitly defined and easily adjusted to different ethical systems, such as utilitarianism or deontology, making it easier to adapt to multiple scenarios (see the sketch below). Another benefit is that CLP allows moral reasoning to be automated: instead of relying exclusively on manual programming for each decision, the system can automatically deduce the most ethical action based on logical rules and contextual data. However, there are also limitations. The main one is that complex moral situations often involve subjective and emotional factors, which are difficult to quantify or represent logically. In addition, while CLP is effective in rule-based systems, it lacks the adaptive learning capabilities of machine learning systems, which can adjust their rules on the basis of new information or historical data in hybrid approaches.
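As a hedged illustration of that adjustability, the sketch below (standard Prolog; the predicates permissible/2, net_utility/2, violates_duty/1 and the action are ours, not taken from the cited works) shows how the same action can be judged differently simply by swapping the rule set:

% Utilitarian reading: an action is permissible when its expected net utility is positive.
permissible(utilitarian, Action) :-
    net_utility(Action, U),
    U > 0.

% Deontological reading: an action is permissible when it violates no duty.
permissible(deontological, Action) :-
    action(Action),
    \+ violates_duty(Action).

% Hypothetical case data.
action(withhold_treatment).
net_utility(withhold_treatment, 0.2).
violates_duty(withhold_treatment).   % e.g., duty of care

% ?- permissible(E, withhold_treatment).
% succeeds for E = utilitarian, fails for E = deontological.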
Although CLP offers a powerful framework for modelling ethical decisions, questions arise about accountability and transparency. Who should be held responsible for the decisions made by a system programmed in CLP? In addition, programmed rules may not capture the moral complexity of certain situations, raising doubts about whether such systems can make truly ethical decisions without human intervention. Another ethical challenge is the bias in the programmed rules. Modelling decisions can reflect prejudices or unfair assumptions. It is essential that logical rules are reviewed and validated by experts to ensure that the system operates fairly.
Various approaches to Knowledge Representation and Reasoning have been developed using the Logic Programming (LP) framework from Mathematical Logic, especially in the domains of Model Theory [
24,
25] and Proof Theory [
26]. This research follows a Proof Theoretical approach, extending the LP language. A Continuous Logic Program, often referred to simply as a Logic Program, is defined as a finite set of clauses, structured as follows:

p1 and … and pm and not pm+1 and … and not pn → q
p1 and … and pm and not pm+1 and … and not pn → ?

where ? is a domain atom representing falsity, and q and each pi are literals, meaning formulas of the form a or ¬a, where a is an atom, for m, n ∈ ℕ0 and i = 1, …, n. In this context, CLP introduces a different type of negation: strong negation, denoted by the classical negation symbol ¬. In many cases, it is useful to represent ¬a as a literal, provided that ¬a can be proven. In CLP, the expressions a and not a, where a is a literal, are considered extended literals, while a or ¬a, where a is an atom, are referred to as simple literals. Intuitively, not a holds true when there is no justification to believe a, whereas ¬a requires a proof for the negation of a.
Additionally, every CLP program is associated with a set of abducibles [
27]. Abducibles serve as hypotheses that offer potential solutions or explanations to given queries, typically framed as exceptions to the extension of predicates within the program. To reason about the knowledge base encapsulated in a specific program or theory, based on the aforementioned formalism, let us consider a procedure defined through the extension of a predicate called demo, using CLP. This predicate enables reasoning about the knowledge base in a specific domain, structured according to the formalism described earlier. Defeasible rules are represented as abducibles. Given a query, the procedure returns a solution based on a particular set of assumptions.
The use of languages based on CLP can not only preserve the power of first-order logic, but also help to describe forms of incompleteness that occur in databases and overcome the drawbacks mentioned above. Firstly, it has to be verified that CLP is capable of doing at least what first-order logic allows, as well as being able to represent databases with null values and abducibles. Furthermore, the computational and conceptual costs are modest.
The meta-predicate
is defined as a meta theorem-solver designed to handle incomplete information, and it is represented by the signature [
28]
demo: T, V → {true, false, unknown}.
In other words, it determines the valuation V of a theorem T in terms of the truth values true (or 1), false (or 0) and unknown, according to the following clauses:

T → demo(T, true).
¬T → demo(T, false).
not T and not ¬T → demo(T, unknown).
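A minimal executable sketch of this three-valued valuation is given below (standard Prolog provides only negation as failure, so strong negation is simulated here by explicit neg/1 facts, and theorems are stored as fact/1 terms; the knowledge base is hypothetical):

:- dynamic fact/1, neg/1.

% Hypothetical knowledge base.
fact(consent(andrew, procedure_a)).
neg(consent(marie, procedure_b)).

% demo(T, V): valuation V of theorem T under incomplete information.
demo(T, true)    :- fact(T).                 % T is provable
demo(T, false)   :- neg(T).                  % the negation of T is provable
demo(T, unknown) :- \+ fact(T), \+ neg(T).   % neither T nor its negation is provable

% ?- demo(consent(andrew, procedure_a), V).  V = true.
% ?- demo(consent(marie, procedure_b), V).   V = false.
% ?- demo(consent(james, procedure_b), V).   V = unknown.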
As a basic example, we can represent the knowledge using CLP by considering the extensions of predicates that describe certain aspects of medical ethical modeling. This can be formally expressed using predicates such as survival_rate, quality_of_life, futility_of_treatment, and consent. The formalization is as follows:
survival_rate: Patient x Intervention x Rate
quality_of_life: Patient x Intervention x Measurement
futility_of_treatment: Patient x Intervention x Degree_of_Utility
consent: Patient x Intervention x Value
where the first argument indicates the identification of the patient, the second refers to the intervention being analyzed, and the third represents the value of the predicate’s attribute, specifically the survival rate, a quantification of the expected quality of life for the patient, the degree of utility of a procedure, and the particulars of informed consent. For instance, let us assume that the knowledge is represented in terms of the logic program:
{
(not survival_rate(X, Y, Z) and not abducible(survival_rate(X, Y, Z)) → ¬ survival_rate(X, Y, Z)),
(survival_rate(andrew, procedure_B, 0.3)),
(survival_rate(marie, procedure_C, 0.86)),
(abducible(survival_rate(james, procedure_A, 0.4))),
(abducible(survival_rate(james, procedure_A, 0.7))),
(abducible(survival_rate(james, procedure_A, 0.75)))
} (Program 1)
In Program 1, the first clause signifies the closure of the predicate survival_rate. The second and third clauses indicate that the survival rate values for patients named Andrew and Marie for the procedures procedure_B and procedure_C are 0.3 and 0.86, respectively. The fourth, fifth, and sixth clauses specify that the value for the patient named James undergoing procedure_A is unknown, but the possible values belong to the set {0.4, 0.7, 0.75}.
{
(not quality_of_life(X, Y, Z) and not abducible(quality_of_life(X, Y, Z)) → ¬ quality_of_life(X, Y, Z)),
(quality_of_life(andrew, procedure_B, ⊥)),
(quality_of_life(X, Y, ⊥) → abducible(quality_of_life(X, Y, Z))),
(quality_of_life(marie, procedure_A, 0.25)),
(quality_of_life(james, procedure_C, 0.43))
} (Program 2)
In Program 2, the first clause represents the closure of the predicate quality_of_life. In the second and third clauses, the symbol ⊥ denotes a null value, indicating that the variable Z can take on any value within its domain. The fourth and fifth clauses specify that the quality_of_life values for the patients named Marie and James undergoing the procedures procedure_A and procedure_C are 0.25 and 0.43, respectively.
{
(not futility_of_treatment(X, Y, Z) and not abducible(futility_of_treatment(X, Y, Z)) → ¬ futility_of_treatment(X, Y, Z)),
(futility_of_treatment(andrew, procedure_A, 0.6)),
(futility_of_treatment(marie, procedure_B, 0.4)),
(abducible(futility_of_treatment(james, procedure_C, 0.7))),
(abducible(futility_of_treatment(james, procedure_A, 0.6)))
} (Program 3)
In Program 3, the first clause indicates the closure of the predicate futility_of_treatment. The second and third clauses specify that the values of futility_of_treatment for the patients named Andrew and Marie, for the procedures procedure_A and procedure_B, are 0.6 and 0.4, respectively. The values for the patient named James undergoing procedure_C or procedure_A are unknown; the possible values are 0.7 and 0.6, and, since no invariant constrains these abducibles, possibly both hold.
{
(not consent(X, Y, Z) and not abducible(consent(X, Y, Z)) → ¬ consent(X, Y, Z)),
(abducible(consent(james, procedure_B, 0.33))),
(abducible(consent(james, procedure_B, 0.42))),
(?((abducible(consent(X, Y, Z)) or abducible(consent(X, Y, W))) and ¬ (abducible(consent(X, Y, Z)) and abducible(consent(X, Y, W)))))
} (Program 4)
In Program 4, the first clause signifies the closure of the predicate consent. The second and third clauses indicate that the consent values for the patient named James undergoing procedure_B can be either 0.33 or 0.42, or possibly both. However, the fourth clause introduces an invariant that implements the XOR operator, meaning it asserts that the consent value can be either 0.33 or 0.42, but not both simultaneously.
It is now feasible to generate all possible scenarios to represent the universe of discourse based on the information provided in Logic Programs 1, 2, 3, and 4.
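As a hedged illustration of what generating these scenarios can look like, the sketch below (standard Prolog; the driver predicates abducible_set/2, scenario/1 and pick_one_each/2 are ours) enumerates one value per abducible set, honouring an XOR constraint such as the one in Program 4:

% Candidate values for the unknown attributes of Programs 1 and 4.
abducible_set(survival_rate(james, procedure_A), [0.4, 0.7, 0.75]).
abducible_set(consent(james, procedure_B),       [0.33, 0.42]).   % XOR: exactly one

% A scenario picks exactly one value from every abducible set.
scenario(Choices) :-
    findall(Attr-Values, abducible_set(Attr, Values), Sets),
    pick_one_each(Sets, Choices).

pick_one_each([], []).
pick_one_each([Attr-Values | Rest], [Attr-V | Choices]) :-
    member(V, Values),
    pick_one_each(Rest, Choices).

% ?- scenario(S).
% enumerates 3 x 2 = 6 scenarios, e.g.
% S = [survival_rate(james, procedure_A)-0.4, consent(james, procedure_B)-0.33].

Non-disjoint abducible sets, such as those in Program 3, would instead require choosing non-empty subsets rather than single values.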
The objective of the
QoI [
29] based approach to program assessment is to develop a quantification process derived from logic programs or theories created through an evolutionary process aimed at addressing problems in environments with incomplete information. The
QoI associated with a generic predicate
p can be analyzed and measured as a truth value within the interval [0, 1], where 0 represents the truth value false and 1 corresponds to the truth value true. In cases where the value is unknown and cannot be narrowed down to a finite set of values, the QoI is defined as QoI = 0 (equivalently, QoI = 1/Card with Card → ∞, since no candidate value can be discarded).
In situations where the information is unknown but can be inferred from a finite set of values, the QoI is expressed as QoI = 1/Card, where Card denotes the cardinality of the set of abducibles for p, assuming the set of abducibles is disjoint. If the set of abducibles is not disjoint, the QoI is defined as:

QoI = 1 / (C_1^Card + C_2^Card + … + C_Card^Card)

where C_Card^Card is a card-combination subset with Card elements; the denominator counts the non-empty subsets of the abducible set, i.e., 2^Card − 1.
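A small sketch of this computation (standard Prolog; the predicate qoi/3 is ours, and the non-disjoint case uses the equivalent closed form 2^Card − 1):

% qoi(+Card, +Disjoint, -QoI): quality of information for an unknown value
% drawn from a set of Card abducibles.
qoi(Card, disjoint, QoI) :-        % abducibles are mutually exclusive
    Card > 0,
    QoI is 1 / Card.
qoi(Card, non_disjoint, QoI) :-    % any non-empty subset of abducibles may hold
    Card > 0,
    QoI is 1 / (2 ** Card - 1).

% ?- qoi(3, disjoint, Q).       Q = 0.333...  (e.g., James's survival_rate in Program 1)
% ?- qoi(2, non_disjoint, Q).   Q = 0.333...  (two abducibles, three non-empty subsets)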
The next aspect of the model to consider is the relative importance assigned to each attribute under observation by a predicate, denoted as w_ij, which represents the relevance of attribute j for predicate i. It is also assumed that the weights of each predicate are normalized, meaning that:

w_i1 + w_i2 + … + w_in = 1, for every predicate i.
It is now possible to define a scoring function that, given a value in a multidimensional space representing all the attribute domains, quantifies the QoI resulting from invoking a logic program to prove a theorem [22]. Specifically, this is achieved by placing the values into a multidimensional space and projecting them onto a two-dimensional plane, which defines a circle. The dashed n-slices of the circle (in this example built on the extensions of the four predicates survival_rate, quality_of_life, futility_of_treatment and consent) denote the QoI associated with each of the predicate extensions that make up the logic program, namely 0.1925, 0.165, 0.1925 and 0.2075, respectively. For the particular cases of Andrew, Marie and James, the QoI is 0.25, 0.33 and 0.18, respectively. The most complete predicate is consent, and the most complete overall information concerns Marie.
The selection of the optimal theory or logic program will be determined by a relational order applied to the QoI values of all possible scenarios. In practical terms, at the conclusion of this process, we will obtain a set of theories (or scenarios) that represent the best models of the universe of discourse, which will provide the data to be input into a possible connectionist module.
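The exact form of the scoring function is given in [22]; purely as a hedged sketch, assuming it behaves like a weighted sum of per-predicate QoI values with normalized weights (an assumption on our part), ranking the candidate scenarios could proceed as follows:

:- use_module(library(apply)).   % foldl/4
:- use_module(library(lists)).   % max_member/2
:- use_module(library(yall)).    % lambda notation

% score(+WeightedQoIs, -Score): weighted sum of Weight-QoI pairs,
% assuming the weights of each predicate are normalized (they sum to 1).
score(WeightedQoIs, Score) :-
    foldl([W-Q, A0, A]>>(A is A0 + W * Q), WeightedQoIs, 0, Score).

% best_scenario(+Scenarios, -BestName, -BestScore):
% Scenarios is a list of Name-WeightedQoIs; the highest-scoring one is preferred.
best_scenario(Scenarios, BestName, BestScore) :-
    findall(S-N, (member(N-WQs, Scenarios), score(WQs, S)), Scored),
    max_member(BestScore-BestName, Scored).

% Example with the per-predicate QoI values quoted in the text and uniform weights:
% ?- best_scenario([baseline-[0.25-0.1925, 0.25-0.165, 0.25-0.1925, 0.25-0.2075]], N, S).
% N = baseline, S = 0.189375.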
The extensions of the predicates that constitute the universe of discourse must be generated and incorporated into the CLP program construction to establish a foundation for decision-making. This process is bi-directional, as it must address not only organizational, functional, technical, and scientific requirements but also ethical and legal considerations, along with data quality, information security, access control, and privacy. In healthcare, this generation is derived from Electronic Health Records (EHR) [
30]. EHR serves as a core application that horizontally integrates across healthcare units, facilitating a comprehensive analysis of medical records across various services, units, or treated conditions. It brings computational models, technologies, and tools to healthcare settings, based on data, agents, multi-agent systems, and ambient intelligence.
An EHR consists of standardized, ordered, and concise documents aimed at recording actions and medical procedures; it is a compilation of information gathered by physicians and other healthcare professionals, encompassing all relevant patient health data and a follow-up on risk values and clinical profiles. The primary objective is to enhance data processing while reducing time and costs, resulting in more effective and quicker patient assistance, thereby improving overall quality.
Any conceptualization of an information society related to healthcare is built on three fundamental components: raw medical data, reconstructed medical data, and derived medical data [
22]. Clinical research and practice involve a systematic data collection process to organize knowledge about patients, their health status, and the reasons for their healthcare admission. Concurrently, the data must be recorded in an organized manner to facilitate effective automation and support through Information Technologies. For instance, patient data collected from an information repository should be registered in an efficient, consistent, clear, and manageable manner to enhance understanding of diseases and therapies. The medical processes for data registration are complemented by the information exchange among different physicians involved in the patient’s care, ensuring that clinical data recording is preserved within the EHR application and procedural framework. Interoperability further enables the sharing of information across multiple information systems.
The data collection process originates from a clinical recording format that includes a problem list, a database containing the patient history along with physical examination and clinical findings, diagnostic, therapeutic, and educational plans, and daily SOAP (Subjective, Objective, Assessment, and Plan) progress notes [
31]. The problem list acts as an index for the reader, with each problem tracked until resolution. This system significantly influences note-taking by recognizing the four distinct phases of the decision-making process: data collection, problem formulation, development of a management plan, and evaluation of the situation with necessary plan revisions.
The following two cases, drawn from [
28], illustrate ethical dilemmas in medical decision-making, in the Intensive Care Unit (ICU) [
32]:
Case 1: Mr. PD, an 81-year-old man with a history of cardiopathy and diabetes, is admitted to ICU with Acute Respiratory Distress Syndrome (ARDS). Despite advancements, his chances of survival are low, and his quality-adjusted life expectancy post-ARDS is expected to be poor. During a medical meeting, the assistant physician asks whether ICU resources should continue to be used on Mr. PD, given the survival rates, treatment costs, and expected quality of life.
Case 2: Mrs. GB, a 36-year-old woman, is hospitalized after a car accident and diagnosed with sepsis, Acute Lung Injury (ALI), and a Glasgow coma scale of 3. She requires ICU care, but there are limited beds, meaning Mr. PD would need to be transferred. While moving Mr. PD poses risks due to his fragile state, Mrs. GB’s younger age and better prognosis suggest a higher likelihood of recovery with better quality of life.
The assistant physician must decide how to allocate the ICU resources between the two patients.
These cases highlight the complexities of ethical decision-making in healthcare, where resource allocation, survival probabilities, and quality of life must be carefully weighed.
The continuous logic program for the predicate survival-rate:
{
(not survival-rate(X, Y) and not abducible(survival-rate(X, Y)) → ¬ survival-rate(X, Y)),
(survival-rate(X, unknown-survival-rate) → abducible(survival-rate(X, Y))),
(ards(X) and pao2(X, low) and evaluate(X, Y) → survival-rate(X, Y)),
(abducible(survival-rate(gb, 0.5))),
(?((abducible(survival-rate(X, Y)) or abducible(survival-rate(X, Z))) and ¬ (abducible(survival-rate(X, Y)) and abducible(survival-rate(X, Z)))))
/ This invariant states that the exceptions to the predicate survival-rate follow an exclusive or /
}
The continuous logic program for the predicate survival-quality:
{
(not survival-quality(X, Y) and not abducible(survival-quality(X, Y)) → ¬ survival-quality(X, Y)),
(survival-quality(X, unknown-survival-quality) → abducible(survival-quality(X, Y))),
(survival-quality(gb, 0.8)),
(abducible(survival-quality(pd, 0.1))),
(?((abducible(survival-quality(X, Y)) or abducible(survival-quality(X, Z))) and ¬ (abducible(survival-quality(X, Y)) and abducible(survival-quality(X, Z)))))
}
The continuous logic program for the predicate cost:
{
(not cost(X, Y) and not abducible(cost(X, Y)) → ¬ cost(X, Y)),
(cost(X, unknown-cost) → abducible(cost(X, Y))),
(cost(gb, unknown-cost)),
(cost(pd, unknown-cost)),
(?((abducible(cost(X, Y)) or abducible(cost(X, Z))) and ¬ (abducible(cost(X, Y)) and abducible(cost(X, Z)))))
}
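To close the example, the following is a hedged sketch (standard Prolog; underscores replace the hyphens used in the listings, explicit abducible/1 facts stand in for the defeasible knowledge above, and the query predicates value/3 and favoured/2 are ours) of how the assistant physician's question could be posed over these programs:

:- dynamic survival_rate/2, survival_quality/2.

% Known and assumed (abducible) values from the three programs above.
survival_quality(gb, 0.8).
abducible(survival_rate(gb, 0.5)).
abducible(survival_quality(pd, 0.1)).
% Mr. PD's survival rate and both patients' costs remain unknown.

% value/3 reads an attribute either from a proven fact or from an assumed abducible.
value(Attr, Patient, V) :- Goal =.. [Attr, Patient, V], call(Goal), !.
value(Attr, Patient, V) :- Goal =.. [Attr, Patient, V], abducible(Goal).

% A patient is favoured for the last ICU bed when both the survival rate and the
% expected survival quality can be established and exceed those of the other patient.
favoured(P, Q) :-
    value(survival_rate, P, RP),    value(survival_rate, Q, RQ),
    value(survival_quality, P, SP), value(survival_quality, Q, SQ),
    RP > RQ, SP > SQ.

% ?- favoured(gb, pd).
% fails: Mr. PD's survival rate is unknown, so the comparison cannot be completed
% without further assumptions, mirroring the dilemma described in Cases 1 and 2.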