Formalization of Ethical Decision Making: Implementation in the Data Privacy of Wearable Robots

As automation in robotics and artificial intelligence increases, a growing amount of ethical decision making will need to be automated. Ethical decision making, however, raises novel challenges for designers, engineers, ethicists, and policymakers, who will have to explore new ways to realize this task. For example, engineers building wearable robots should take privacy aspects and their different context-based scenarios into consideration when programming decision-making procedures. This in turn requires ethical input in order to respect norms concerning privacy and informed consent. The presented work focuses on the development and formalization of models that aim at ensuring provably correct ethical behavior of artificial intelligent agents, extending and implementing a logic-based proving calculus. This leads to a formal theoretical framework of moral competence that could be implemented in artificial intelligent systems in order to formalize certain parameters of ethical decision making and thereby ensure safety and justified trust.


INTRODUCTION
As autonomous artificial intelligent (AI) systems take up a progressively prominent role in our daily lives, they will sooner or later be called on to make significant, ethically charged decisions and actions (Bringsjord et al., 2006). Over the last years, the issue of ethics in artificial intelligence and robots has gained great attention, and many important theoretical and applied results have been derived with a view to developing ethical systems (Tzafestas, 2018). But how could a robot or any AI agent be considered ethical? Among the requirements are a broad capability to envisage the consequences of its own decisions, as well as an ethical policy with rules to test each possible decision/consequence, so as to choose the most ethical scenario (Danaher, 2019; Tzafestas, 2018). The challenge is how we can guarantee that robots will always perform ethically correct behavior as defined by the ethical code declared by their human supervisors.
Academic research and real-life incidents of AI system failures and misuse have indicated the need for employing ethics in software development (Bringsjord et al., 2006). Nevertheless, studies on methods and tools to address this need in practice are still lacking, resulting in a growing demand for AI ethics as a part of software engineering (Vakkuri et al., 2019). But how can AI ethics be integrated in engineering projects when they are not formally considered? There has been some work on the formalization of ethical principles in AI systems (L. A. Dennis et al., 2015). Previous studies that attempt to integrate norms into AI agents and design formal reasoning systems have focused on: ethical engineering design (Flanagan et al., 2008; Robertson et al., 2019; Winfield et al., 2019; Wynsberghe, 2012), norms of implementation (Hofmann, 2012; Sisk et al., 2020), moral agency (Cunneen et al., 2019; Floridi & Sanders, 2004), mathematical proofs for ethical reasoning (Bringsjord et al., 2006), logical frameworks for rule-based ethical reasoning (Ågotnes & Wooldridge, 2010; Arkin, 2009; Iba & Langley, 2011), reasoning in conflict resolution (Pereira & Saptawijaya, 2007), and inference to apply ethical judgments to scenarios (Blass & Forbus, 2015).
One of the categories of AI ethics is Ethics by Design, the incorporation of ethical reasoning abilities as a part of system behavior, such as in ethical robots (Vakkuri et al., 2019). In this work, assuming that an AI agent can be capable of ethical agency, the purpose is to enable AI agents to reason ethically (L. A. Dennis et al., 2015). This includes taking societal and moral norms into consideration; ranking the respective priorities of norms in various contexts; explaining their reasoning; and securing transparency and safety (Dignum, 2018). These systems are often established to assist ethical decision making by people, identifying the ethical principles that a system should not violate (L. Dennis et al., 2016).
Moral reasoning is a key issue in AI ethics, and computational formal proofs are perhaps the single most effective tool for establishing credible and trustworthy reasoning (L. Dennis et al., 2016). The Moral extension of the Argumentative Proof Event Calculus (MAPEC) presented in (Almpani et al., 2022) combines the ethical framework from (L. Dennis et al., 2016) and the moral competence from (Malle & Scheutz, 2019) to develop a formal representation of ethical scenarios and integrate moral norms and concepts (see Figure 1). For a detailed description of the initial Argumentative Proof Event Calculus (APEC), see (Almpani & Stefaneas, 2017; Almpani et al., 2017).
The presented use case includes ethical considerations relating to the data privacy of wearable robots (WRs). The case study in this paper describes the desirable ethical behavior of WRs concerning access to users' data. It is discussed how code in a robotic architecture affects data and privacy, and why such issues should be considered from a formal verification perspective (Lutz & Tamò Larrieux, 2015).
For the realization of this effort, the objectives are:
• to formalize what it means for a system's decision-making to be ethically correct;
• to provide a logical specification with which the system can be built and checked;
• to extend the Argumentative Proof Event Calculus to create an abstract moral framework (MAPEC) with ethical logic-based argumentation;
• to illustrate a case study concerning the data privacy of WRs, indicating how such an ethical framework can be implemented in computational systems.

A FORMAL FRAMEWORK FOR ETHICAL CODES
In an autonomous system, the aim is not to show that an agent always does the moral thing, but that its actions are taken for the right reasons. In many real-life scenarios, it is not easy to provide a complete set of decisions that will cover all situations (L. Dennis et al., 2016). Therefore, the system may have two modes of operation: either it uses its pre-existing set of actions in conditions that are within its anticipated parameters, or, when new options appear, it acts outside of these parameters based on various available resources that allow it to govern its actions using ethical reasoning (L. Dennis et al., 2016).
Representing ethical codes and rules requires an ethical policy: a hierarchy over the rules that are appropriate in different contexts (defining even which rule is more acceptable to violate when no ethical option is available). To demonstrate that a system has the property of making the right decisions (both operationally and ethically), it should be formally specified what the ''right decisions'' are.
Formal verification (Fisher et al., 2013) involves proving or disproving that a system complies with a requirement determined in a mathematical language, i.e., a ''formally specified property'' expressed within a linear temporal logic, which in our case allows us to define what decisions rational agents should make at some specific moment (L. Dennis et al., 2016). Thus, the ethical policy can be formalized in some computational logic L, whose well-defined formulas and proof theory specify the basic concepts required: the temporal structure, events, actions, sequences, agents, and so on (Bringsjord et al., 2006). The presented methodology proof-theoretically formalizes the ethical policy and implements it, meaning that this methodology encodes not the semantics of the logic L but its proof calculus (Bringsjord et al., 2006).
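As a schematic illustration (our own, not taken verbatim from the cited works), such a formally specified property could take the linear temporal logic form □(user_in_danger → ◇share_necessary_data), read as: in every state of every execution, if the user is in danger, then the necessary data are eventually shared. Formally verifying the agent then amounts to proving that all of its possible executions satisfy this formula.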
Logic-based systems that are capable of dealing with increasing degrees of environmental uncertainty and variability are preferable (Gomila & Müller, 2012), and cognition constitutes a way to deal with an undefined and uncertain world, meaning not necessarily a chaotic one but just a complex one. Argumentation is a tool of cognition that can formalize the science of common-sense reasoning, on which new types of systems can be engineered (A. Kakas & Michael, 2016).
Therefore, to address the challenge of ensuring ethically correct behavior, a logic-based argumentation approach such as MAPEC (Almpani et al., 2022) is proposed to guarantee that robots only execute events that can be proved ethically acceptable in a human-selected logic, by formalizing an ethical code (Bringsjord et al., 2006).

Moral Competence Expressed with an Argumentation-Based Framework
In an ethical framework, a moral vocabulary allows the agent to represent norms, ethically substantial behaviors, and their judgments (conceptually and linguistically) to fuel moral communication. It contains: a normative frame, referring to the features of norms and to the normatively supported qualities of agents; a language of norm violation, characterizing attributes of violations and of violators; and a language of responses to violations (Malle & Scheutz, 2019). In our approach, the concept of norms is described with events, extending their context to abstract ethical events. The abstract ethical events present the arguments in a moral debate. The violations are analogous to the counterarguments. The role of ethical agents can be depicted as akin to the roles of the supporter (or prover) and the attacker in our argumentation framework (Almpani & Stefaneas, 2017), where the supporter plays the role of the ethically correct agent and the attacker the role of the violator. Their actions are the responses to moral violations with arguments or counterarguments.
Moral communication expresses an agent's efforts to recognize, clarify, or defend norm events, as well as to intervene or rectify after a norm violation.

Definition 1: Abstract Ethical Events
An abstract ethical event is represented by e, and its purpose is to defend an ethical principle c. The principle c can also be interpreted as ''the supporter considers it immoral to permit or cause ¬c (to happen)''. The abstract ethical event has the same structural components (data Φ, warrant w, ethical claim c) as a proof event in APEC (Almpani et al., 2017). Thus, an ethical principle c is in force when the event concludes to c, based on the data Φ and following the inference rules w:

e(Φ, w, c): Φ ⊢w c,

where e ∈ E, with E the set of ethical events for c. Similarly, e* denotes the violation event.
Moral judgment is the evaluation of actions relative to norms, leading to a judgment of the temporal state of the moral actions; it includes the predicates Happens(e,t), Initiates(e,f,t), ActiveAt(e,f,t), and Clipped(e,f,t), leading finally either to the ethical principle being Valid(e,f,t) or to Terminate(e,f,t) (Almpani et al., 2022).
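Schematically (a simplified illustration of the temporal reading, not the full MAPEC axiomatization): if Happens(e, t₁) holds for an ethical event e defending principle c, and Initiates(e, c, t₁), then the principle is active, ActiveAt(e, c, t), for t > t₁; if no violation e* clips it, Clipped(e*, c, t′) with t₁ < t′ < t, the judgment concludes Valid(e, c, t), and otherwise it concludes Terminate(e*, c, t′).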
A system of norms contains a society's principles for ethical behavior. Norms guide a supporter's arguments and decisions to perform specific (moral) actions, and shape others' (moral) judgments of those behaviors (Malle & Scheutz, 2019). Thus, they establish an ethical policy with ethical rules.

Definition 2: Ethical Policy
An ethical policy P is a tuple P = ⟨R, >⟩, where R is a finite set of ethical rules between the events e, with e ∈ E, and > is a complete (not necessarily strict) priority order on R. The expression e₁ = e₂ indicates that violating e₁ is equally unethical as violating e₂, while e₁ > e₂ denotes that violating e₁ is equally or less unethical than violating e₂. A special category of ethical event, symbolized as e₀, is vacuously satisfied and is encompassed in every policy, so that ∀e ∈ E: e > e₀, indicating that it is always strictly more unethical to do nothing and permit any of the unethical conditions to happen.
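For instance (anticipating the use case below), a policy over the events e₁ = share personal data with consent, e₂ = don't share personal data, and e₃ = share personal data without consent is P = ⟨{e₀, e₁, e₂, e₃}, >⟩ with e₁ = e₂, eᵢ > e₃ for i = 1, 2, and e > e₀ for every event e: violating e₁ or e₂ is equally unethical, either is less unethical to violate than e₃, and doing nothing at all remains the worst option.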
A moral action is an event, taking place in compliance with the norms and at a specific time, which is accommodated to and harmonized with other social agents (violators or provers) operating in the same context. The norm violations e* of a violator are denoted as attack(e*,t) events, and the ethical proving actions of a supporter are denoted as support(e,t), both specified by the time t to express the temporal sequence of the actions.

Definition 3: Ethical Actions
Given a certain context α, an event e, and an ethical principle c, an ethical action can be one of the following formulas:
• support(e,t) ⇒α c, denoting the action of a supporter to defend the ethical principle c with ethical event e in context α at time t;
• attack(e*,t) ⇒α ¬c, denoting the action of a violator to contravene the ethical principle c with violation e* in context α at time t.
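For instance, with the vocabulary of the use case below, support(e₁, t) ⇒α c₁ states that sharing personal data with consent (e₁) defends informed consent (c₁) in context α at time t, whereas attack(e₁*, t) ⇒α ¬c₁ states that the corresponding violation e₁* (e.g., data leaving the device without the user's consent) contravenes it.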

Prioritized Ethical Rules to Define Context-Based Scenarios
Context determines dynamic priorities on the decision policies of the agent (A. C. Kakas & Moraitis, 2003). To be able to reason about scenarios in terms of ethics, we need a scenario selection process that uses the ethical policy, which can be represented within argumentation theory. The agent can be in various contexts while deciding which scenario to choose, so the rules from all the contexts need to be considered when implementing a plan. We advocate scenarios that are ethical or that at least violate the fewest ethical principles, both in quantity and in severity.
The scenarios are ordered using ≺, which leads to a complete order over scenarios (L. Dennis et al., 2016). This can describe an agent's ethical policy based on the different contexts, with argumentation levels. In the first level, we have the rules that refer directly to the domain of the agent: the object-level decision rules.
In the other priority levels, the rules relate to the ethical policy under which the agent generates the different possible scenarios from which it can choose. In the highest priority level are the rules representing the optimal course of action: the most ethical (or least unethical) scenario (A. C. Kakas & Moraitis, 2003).

Definition 4: Levels of Ethical Rules
Given a policy P = ⟨R, >⟩ and a plan based on the ethical rules R, V is a set of abstract ethical events (including the events e and the violations e* of the ethical principles c), defined as:

V = {e | e(Φ,c), e ∈ E, support(e,t) ⇒α c}

We define the operation Higher, giving the higher level of ethical scenarios L based on the set of events V, as follows:

L = Higher(V) = {e | e ∈ V and ∀eᵢ ∈ V: e ≥ eᵢ}

Consider a set of available, possibly ethical, scenarios Lᵢ for the different sets Vᵢ. The scenarios lead to different levels of ethical rules Lᵢ ∈ L satisfying the following properties, which define which available scenario is more ethical (or less unethical). For every i, j ∈ ℕ, it holds that Lᵢ ≻ Lⱼ if at least one of the following holds:
1. Vᵢ = Ø and Vⱼ ≠ Ø;
2. e′ > e for every e ∈ Higher(Vⱼ \ Vᵢ) and every e′ ∈ Higher(Vᵢ \ Vⱼ);
3. e′ = e for every e ∈ Higher(Vⱼ \ Vᵢ) and every e′ ∈ Higher(Vᵢ \ Vⱼ), while |Higher(Vᵢ \ Vⱼ)| < |Higher(Vⱼ \ Vᵢ)|.
If none of these holds, then Lᵢ and Lⱼ are equally (un)ethical, i.e., Lᵢ ∼ Lⱼ. The first relation makes sure that ethical scenarios are always favored over unethical ones. The second guarantees that, when the principles that are the same in both scenarios are ignored, the scenario that defends the more valuable principles is considered ethically ''higher''. The third states that, when the principles violated in each scenario are different but equally valuable, the plan that violates fewer principles is ethically ''higher''.
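As a worked example (our own, using the events of the use case below): let scenario L₁ violate only e₁ (V₁ = {e₁}) and scenario L₂ violate only e₃ (V₂ = {e₃}), with e₁ > e₃ in the policy. Then Higher(V₂ \ V₁) = {e₃} and Higher(V₁ \ V₂) = {e₁}, and since e₁ > e₃, the second condition yields L₁ ≻ L₂: the scenario violating the less severely ranked event is the more ethical choice. If instead V₁ = Ø, the first condition applies directly and L₁ ≻ L₂ regardless of what L₂ violates.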
We can now define a logical property specifying what it means for the reasoning and decision-making of an agent to be ethical. Informally, whenever an agent selects a scenario Lᵢ, all other applicable scenarios Lⱼ should be ethically ''lower'', i.e., Lⱼ ≺ Lᵢ.

IMPLEMENTATION IN DATA PRIVACY OF WEARABLE ROBOTS
The Moral Argumentative Proof Event Calculus is a framework that helps stakeholders of various AI projects build an ethics roadmap in a methodical way. The framework can introduce ethics foresight early in the deployment procedure, rather than being applied as an auditing or assessment tool. The procedure has three main stages, involving the interaction of three aspects (agents, ethical principles, and contexts): 1. identify the normative frame and the agents; 2. discover the ethical events and rules; and 3. prioritize the ethical rules to define the order of scenarios.
To better illustrate the procedure, we walk through a fictional use case of a WR and its privacy dilemmas to demonstrate how it can be applied.
The growth of the WR market (expected to record a CAGR of 22.17% over the period 2020-2025 (Mordor Intelligence, 2020)) makes it essential to regulate unique privacy challenges that should be addressed, concerning data gathering (Spann, 2016; Wachsmuth, 2018), transfer protocols, standards for consent and exceptions (Consortium, 1996), etc. This implemented use case considers a method for developing verifiable ethical mechanisms for WRs' data privacy (L. A. Dennis et al., 2015).
This system, named Wearable Robots' Ethics of Data (WeaRED), presents a (short) list of related ethical challenges to outline possible implementations of the formal theoretical framework described above (see Figure 2). The ethical policy is given by comparing the challenges in terms of how unethical it is to violate them (L. Dennis et al., 2016). The ethical scenarios are context-dependent refinements of the ethical policy.
In the initial stage, the primary goal is to identify the scope of the ethics analysis and set the scene by identifying the primary normative frame and the key agents involved. For example, this use case outlines how an outcome of a data-driven algorithm from a WR is intended to be used, which groups of agents may interact with the robot's user, and what ethical rules derive from their potential access to the user's data; in our case these agents can be doctors (R_doc), family (R_fam), coworkers (R_cow), or strangers (R_str).
A list of top ethical principles that are of importance to data access should be included, such as informed consent (c₁), privacy (c₂), and safety (c₃). These ethical values are ''communicated'' with the ethical events:
• e₁ = share personal data with consent,
• e₂ = don't share personal data,
• e₃ = share personal data without consent,
with e₁ = e₂ and eᵢ > e₃ for i = 1, 2.
In the second stage, the framework delves deeper into the analysis by conducting an exploration of the agents' ethical events and the ethical rules in the different contexts. This step identifies what kinds of risks and violations are applicable to the primary stage. WRs are unique in that they are attached to the user, employing many sensors that collect data on brain waves, muscle movement, heart rate, temperature, and so on (Felzmann et al., 2018). These data are collected and processed on board. The possibly beneficial or problematic operations related to the data generated and processed by the WR might improve care delivery and the user's WR experience, but might also be confronted with exceptionally dangerous situations (Felzmann et al., 2018).
For instance, during regular conditions, such systems are expected to fulfill their decisions within a prearranged ethical framework of rules and protocols. The general principle of data procedures is (General Data Protection Regulation (GDPR), 2016): "The explicit and informed, written or recorded consent of the data subject is mandatory for the disclosure, process or transmission of personal data".
However, in exceptional scenarios, they may choose to disregard their basic goals or break rules in order to behave ethically, e.g., to save the user's life. Based on technical guidelines for medical data security (Consortium, 1996; Floridi & Sanders, 2004), there is an exception to this general condition, stating that: "In medical emergencies, where the data subject cannot give consent as in the case of an incapacitated person, on fully regaining his faculties the data subject must be able to withdraw any consent given on his behalf".
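To make this concrete, the consent norm and its emergency exception could be encoded declaratively along the following lines. This is a minimal sketch in PSOA RuleML presentation syntax (the language used for the prototype below); all predicate and constant names are our own illustrative choices, and a runnable version may additionally need prefix declarations depending on the PSOATransRun release:

%Consent norm with medical-emergency exception (illustrative sketch)
Document (
  Group (
    % Disclosure of a subject's personal data is permitted if the
    % subject has given explicit informed consent.
    Forall ?d ?s (
      :PermittedDisclosure(?d ?s) :- And(:PersonalData(?d ?s) :ConsentGiven(?s ?d))
    )
    % Exception: in a medical emergency the disclosure is permitted
    % without prior consent ...
    Forall ?d ?s (
      :PermittedDisclosure(?d ?s) :- And(:PersonalData(?d ?s) :MedicalEmergency(?s))
    )
    % ... but the consent given on the subject's behalf must remain
    % withdrawable once the subject regains his/her faculties.
    Forall ?d ?s (
      :WithdrawableConsent(?s ?d) :- And(:PermittedDisclosure(?d ?s) :MedicalEmergency(?s))
    )
  )
)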
Nevertheless, we need to ensure that this may happen only for justifiably ethical reasons based on the seriousness of the condition, which at different time points can be regular (t₁), of medium risk (t₂), or dangerous (t₃). When the WR determines that its user is in danger, it requests new scenarios from the ethical policy, since the current one (i.e., not sharing any data without consent) is no longer valid. The ethical guide can produce scenarios based on existing emergency contingency protocols. In each case we have a different ethical policy. The WR should then evaluate the possible feasible scenarios and decide its actions (e.g., whether the WR should withdraw any consent given on the user's behalf, and to whom among the people nearby), leading to the third stage.
In the final stage, the ethical scenarios from the previous stage are prioritized based on the various contexts. In our case study, we have a user who wears a supportive WR in his daily activities, which include visits to the hospital to check his condition and the condition of the WR (α₁), staying at home (α₂), going to work (α₃), or going outside (α₄). In emergency conditions, if a doctor (or, in exceptional scenarios, anyone nearby) cannot access the data, this can delay important medical decisions and potentially harm the health of the user. We suppose that the system has an emergency mode that activates when the personal health data of the user indicate that the person is in danger (i.e., that the user is unable to provide consent to share the data necessary for others to help him), and that it evaluates the context-based scenarios created.
For example, under regular conditions the WR should not share personal medical data with a stranger, but if the user is in a life-threatening situation, it is ethically permissible, and preferable, to share the necessary information with whoever is near rather than protect the data instead of the user's life. We propose the general order R_doc > R_fam > R_cow > R_str, with Rᵢ > Rⱼ meaning that it is less unethical to violate the ethical values referring to Rᵢ than those referring to Rⱼ, and thus preferable if there is no other ethical choice.
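The order R_doc > R_fam > R_cow > R_str itself can be captured by facts plus a closure rule, sketched below in the same illustrative PSOA RuleML style (the predicates :NextLess and :LessUnethical and the constants are our own naming):

%Priority order over agent groups (illustrative sketch)
Document (
  Group (
    % Immediate order between adjacent agent groups.
    :NextLess(:Rdoc :Rfam)
    :NextLess(:Rfam :Rcow)
    :NextLess(:Rcow :Rstr)
    % :LessUnethical is the transitive closure of :NextLess.
    Forall ?x ?y (
      :LessUnethical(?x ?y) :- :NextLess(?x ?y)
    )
    Forall ?x ?y ?z (
      :LessUnethical(?x ?z) :- And(:NextLess(?x ?y) :LessUnethical(?y ?z))
    )
  )
)

Querying :LessUnethical(:Rdoc ?g) would then enumerate every group it is preferable to fall back on before a stranger.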
The different scenarios are demonstrated in Table 1 to show how the different parameters (i.e., health conditions and potential agents to whom data could be shared to) are related to each other in various context-based scenarios.
To create a computational prototype of this use case, ethical reasoning was integrated into the logic-based PSOA RuleML programming language (Boley, 2015) to illustrate how this ethical thinking can be formalized. PSOA RuleML has also been used for the formalization of legal procedures in similar cases, such as exoskeletons (i.e., a type of WR) (Almpani et al., 2020) and medical devices (Almpani et al., 2019, 2018a, 2018b), providing evidence that PSOA RuleML is well suited to express robotics-related decision-making procedures.
PSOA programs may perform deductive reasoning on their atomic beliefs as described in their PSOA-style reasoning rules (Boley, 2015), which can indicate that the agent deduces that everything is as in regular conditions if it is not in an ''emergency'' situation (inferred from the user's health data). Otherwise, in ''dangerous'' conditions, an agent needs to identify that deduction should be applied to derive supplementary scenarios and decisions rather than the ''regular'' one. Based on the above, an agent (i.e., the WR) should: assess the levels of ethical rules to get scenarios annotated with ethical principles; identify the available scenarios when the ''regular'' scenario cannot be executed; and select the most ethical scenario from the available set.
The code fragment below encodes the scenarios where the WR might need to share data (or not) with a coworker. In this fragment, scenario VC_N3 refers to emergency cases where the WR does not have the option to share the data in a ''higher'' scenario (e.g., with a doctor), while scenarios VC_N1 and VC_N2 are more ''regular'' scenarios where the WR is either not permitted to share personal data with a coworker or is only permitted to share data with the user's consent. Generally, the scenarios referring to coworkers are preferred only if the option of a doctor or a relative is not available. In this ethical approach, the user's preferences can and should be taken into consideration when programming the scenarios' priority, and the computational scenarios of the other agents can be similarly formalized (a reconstructed sketch of the fragment is given after the list below). This creates an ethical knowledge base as follows:
• A database is introduced consisting of a set of ethical rules, creating ethical scenarios at a variety of levels.
• A priority between the scenarios is defined.
• If no (more) ethical scenarios are available for a purpose, the different levels of ethical rules are generated from a context-based ethical policy, which annotates the scenarios with the ethical rules that risk being violated.
• In selecting plans, we prioritize those that are most ethical (according to the order ≺), leading to the final decision-making.
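The fragment itself did not survive typesetting; only its header comment (''%WeaRED implementation in PSOA RuleML / Data Privacy'') and part of one rule are recoverable. The following is therefore a reconstructed sketch, not the authors' original code: the predicates :Scenario, :ScenarioViolate, :Applicable, :HigherEvent, and :ChooseOver, and all constants besides the scenario names, are our own illustrative choices:

%WeaRED implementation in PSOA RuleML / Data Privacy (reconstructed sketch)
Document (
  Group (
    % Coworker scenarios, annotated with the condition in which they
    % apply and the ethical event they realize (cf. e1, e2, e3 above).
    :Scenario(:VC_N1 :regular :e2)    % don't share personal data
    :Scenario(:VC_N2 :regular :e1)    % share only with user's consent
    :Scenario(:VC_N3 :dangerous :e3)  % emergency: share without consent

    % Violations risked by a scenario: VC_N3 risks violating
    % informed consent (c1).
    :ScenarioViolate(:VC_N3 :c1)

    % Current condition of the user, inferred from health data.
    :Condition(:regular)

    % A scenario is applicable when its condition currently holds.
    Forall ?s ?c ?e (
      :Applicable(?s ?e) :- And(:Scenario(?s ?c ?e) :Condition(?c))
    )

    % Prefer one applicable scenario over another when it realizes a
    % strictly higher-ranked ethical event (e1 and e2 rank above e3).
    :HigherEvent(:e1 :e3)
    :HigherEvent(:e2 :e3)
    Forall ?s1 ?s2 ?e1 ?e2 (
      :ChooseOver(?s1 ?s2) :-
        And(:Applicable(?s1 ?e1) :Applicable(?s2 ?e2) :HigherEvent(?e1 ?e2))
    )
  )
)

Querying :Applicable(?s ?e) returns the scenarios the WR may enact under the current condition, and :ChooseOver(?s1 ?s2) orders them; replacing the fact :Condition(:regular) with :Condition(:dangerous) makes the emergency scenario VC_N3 the only applicable one.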
This work attempts to establish that an ethical policy can be embedded within a robotic agent in such a way that adherence to the policy can be formally verified, so that it can be checked that the agent will always choose the most ethical decisions.

CONCLUSION
To summarize, this work attempts to develop a proof-theoretical representation of norm scenarios and to integrate ethical concepts into a system by developing a logic-based argumentative proving calculus (MAPEC). An example of the application of such a representation is illustrated with the data-privacy-related scenarios of wearable robots. The next step, in future research, is to build algorithms implementing this framework.

Figure 2. Elements integrated in the implementation of the use case WeaRED