Paper published in a journal (Scientific congresses, symposiums and conference proceedings)
A Model for Regulating of Ethical Preferences in Machine Ethics
Baniasadi, Zohreh; Parent, Xavier; Max, Charles et al.
2018
In Proceedings of International Conference on Human-Computer Interaction, p. 481-506
Peer reviewed
 

Files


Full Text
BANIASADI-HCII2018-preprint.pdf
Author preprint (384.62 kB)



Details



Keywords :
Machine Ethics; AI; Robotics
Disciplines :
Computer science
Author, co-author :
Baniasadi, Zohreh ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)
Parent, Xavier ;  University of Luxembourg > Faculty of Science, Technology and Communication (FSTC) > Computer Science and Communications Research Unit (CSC)
Max, Charles ;  University of Luxembourg > Faculty of Language and Literature, Humanities, Arts and Education (FLSHASE) > Education, Culture, Cognition and Society (ECCS) ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)
Cramer, Marcos
External co-authors :
no
Language :
English
Title :
A Model for Regulating of Ethical Preferences in Machine Ethics
Publication date :
2018
Event name :
HCI International 2018
Event date :
from 15-07-2018 to 20-07-2018
Audience :
International
Journal title :
Proceedings of International Conference on Human-Computer Interaction
Publisher :
Springer
Special issue title :
Security, Privacy and Ethics in HCI
Pages :
481-506
Peer reviewed :
Peer reviewed
Commentary :
Relying on machine intelligence with reduced human supervision requires that we can count on a certain level of ethical behavior from it. Formalizing ethical theories is one plausible way to add an ethical dimension to machines. Rule-based and consequence-based ethical theories are natural candidates for machine ethics, yet a methodology built on either theory alone may recommend actions that are not always justifiable by human values. This inspires us to combine the reasoning procedures of two ethical theories, deontology and utilitarianism, in a utilitarian-based deontic logic that extends STIT (Seeing To It That) logic. We encode the domain knowledge of the methodology in the IDP knowledge base system, whose inference mechanisms let us examine and evaluate the ethical decision-making process in our formalization. To validate the proposed methodology, we carry out a case study on real scenarios from the domain of robotics and autonomous agents. A simplified, illustrative sketch of the combined decision rule is given below the record details.
Available on ORBilu :
since 07 September 2018
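
Illustrative sketch of the combined decision rule

The combination described in the commentary (deontic rules constraining which actions are permissible, utilities ranking the permissible ones) can be pictured with a minimal, hypothetical sketch. The Python code below is not the paper's STIT formalization or its IDP encoding; the action names, rule names, and utility values are invented for the example, and the selection rule (filter by hard deontic constraints, then maximize utility) is only a rough approximation of a utilitarian-based deontic logic.

# Toy illustration: rule-based (deontic) filtering combined with
# consequence-based (utilitarian) ranking. All names and numbers are invented.
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Action:
    name: str
    utility: float                                     # assumed aggregate benefit of the action
    violates: Set[str] = field(default_factory=set)    # deontic rules the action would break

def choose(actions: List[Action], hard_rules: Set[str]) -> Optional[Action]:
    """Return the highest-utility action that violates no hard deontic rule."""
    permissible = [a for a in actions if not (a.violates & hard_rules)]
    if not permissible:
        return None   # nothing is ethically acceptable under the hard constraints
    return max(permissible, key=lambda a: a.utility)

options = [
    Action("brake_hard", utility=3.0),
    Action("ignore_pedestrian", utility=8.0, violates={"do_no_harm"}),
    Action("swerve_into_wall", utility=-5.0),
]
best = choose(options, hard_rules={"do_no_harm"})
print(best.name if best else "no permissible action")   # prints: brake_hard

In the paper itself, the corresponding knowledge is expressed declaratively and the decision procedure is carried out by IDP's inference tasks rather than by an imperative function like the one above.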

Statistics


Number of views
277 (48 by Unilu)
Number of downloads
315 (17 by Unilu)

Scopus citations® :
1
Scopus citations® without self-citations :
1
OpenCitations :
0
WoS citations :
0
