Semantic Analysis of Bloggers Experiences as a Knowledge Source of Average Human Morality

Rafal Rzepka (Hokkaido University, Japan) and Kenji Araki (Hokkaido University, Japan)
Copyright: © 2015 |Pages: 23
DOI: 10.4018/978-1-4666-8592-5.ch005


This chapter introduces an approach and methods for creating a system that refers to human experiences, and to thoughts about those experiences, in order to ethically evaluate other parties' actions and, in the long run, its own. It is shown how applying text mining techniques can enrich a machine's knowledge about the real world and how this knowledge could be helpful in the difficult realm of moral relativity. Possibilities for simulating empathy and for applying the proposed methods within various approaches are introduced, together with a discussion of applying a growing knowledge base to artificial agents for particular purposes, from simple housework robots to moral advisors that could draw on millions of different experiences had by people in various cultures. The experimental results show efficiency improvements over previous research; we also discuss the problems of fairly evaluating moral and immoral acts.
Chapter Preview


In this chapter we describe the latest findings from experiments performed with our system. The system is grounded in the assumption that the majority of people express ethically correct opinions about the behavior of others. Research proceeding from this assumption was introduced at the first AAAI symposium on Machine Ethics (Rzepka & Araki, 2005). Following this course, we have created a shallow knowledge acquisition module for an Artificial Moral Agent (AMA) that mimics our ethical decisions by borrowing knowledge from Internet users. The agent utilizes opinion mining and sentiment analysis techniques to decide what is moral and what is not by adopting a moral position supported by more than two thirds of the retrieved sentences. The agent takes no position where the survey result does not exceed this threshold. Ignoring cases in which a distinct polarity is absent gives the system a safety valve, leaving judgment over morally ambiguous situations to a human user. These ambiguous cases are not forgotten, however: the system keeps the ambiguous sentences in its knowledge base for further contextual analysis. The system utilizes only a very simple set of shallow searching techniques, and the results have been impressive. That said, as we show in this chapter, more reliable Natural Language Processing tools and techniques, working together with human-coded moral lexicons to increase the integration of available information, should be expected to improve the results of future trials. At the core of our system are lexicons, sets of selected keywords, based on different philosophical ideas such as Bentham's Felicific Calculus (Bentham, 1789) for estimating the average emotional outcomes of acts and Kohlberg's stages of moral development for retrieving possible social consequences (Rzepka & Araki, 2012).
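The two-thirds decision rule described above can be sketched as follows. This is a minimal, hypothetical illustration: the function name, the polarity labels, and the example data are invented for this sketch and are not the authors' actual implementation.

```python
def judge_act(polarities, threshold=2 / 3):
    """Return 'moral', 'immoral', or None (abstain) given the per-sentence
    polarity labels retrieved for a queried act."""
    if not polarities:
        return None  # no evidence retrieved: leave the judgment to the user
    total = len(polarities)
    positive = sum(1 for p in polarities if p == "positive")
    negative = sum(1 for p in polarities if p == "negative")
    if positive / total > threshold:
        return "moral"
    if negative / total > threshold:
        return "immoral"
    return None  # ambiguous: kept aside for further contextual analysis

print(judge_act(["positive"] * 8 + ["negative"] * 2))  # 'moral'
print(judge_act(["positive"] * 5 + ["negative"] * 5))  # None (abstain)
```

Abstaining (returning `None`) rather than forcing a verdict is what the text calls the safety valve: ambiguous cases are handed back to the human user instead of being decided by the machine.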

Internet as Knowledge Source about Human Behaviors

Our methods should be attractive to researchers in the humanities, especially sociologists, cognitive scientists, and psychologists who study bloggers' behavior, or who study, for example, the role of gossip and other forms of criticism in moral development. Until recently, the WWW has been treated as a massive garbage can full of sex and violence, not useful for intelligent machines. With this chapter we want to draw the Machine Ethics (ME) community's attention to the fact that computers with constantly improving NLP tools and minimal human input (only 258 keywords divided into two categories in the proposed method) are capable of:

  1. Filtering meaningless noise;
  2. Reading stories of people, the majority of whom, surprisingly to many, seem to represent healthy common sense;
  3. Reusing the discovered knowledge in existing or newly created moral solutions.
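The filtering and labeling capabilities listed above rest on shallow matching against a small two-category keyword lexicon. The sketch below illustrates the general technique only; the cue-word sets here are invented stand-ins, not the actual 258-keyword lexicon used in the chapter.

```python
# Invented stand-ins for the two keyword categories of the real lexicon.
POSITIVE_CUES = {"praised", "thanked", "helped"}
NEGATIVE_CUES = {"arrested", "criticized", "regretted"}

def label_sentence(sentence):
    """Assign a crude polarity label from cue-word matches,
    or None when a sentence contains no cues (filtered as noise)."""
    words = set(sentence.lower().split())
    pos = len(words & POSITIVE_CUES)
    neg = len(words & NEGATIVE_CUES)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return None  # no distinct polarity: meaningless noise

sentences = [
    "he helped an old lady and was thanked",
    "she was arrested for stealing",
    "the weather was nice",
]
print([label_sentence(s) for s in sentences])
# ['positive', 'negative', None]
```

Sentences with no cue words are dropped as noise, while the labeled remainder can feed a majority-vote judgment of the kind described earlier in the chapter.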

Packed with descriptions of unreal worlds and games where killing is fun, the WWW is a “knowledge soup” in which our machines can slowly learn to distinguish fantasy from more realistic stories, avoiding logical yet unreasonable conclusions such as “people can fly on broomsticks because Harry Potter can”. This capacity to distinguish useful from non-useful information is complicated by the fact that, even when bloggers create different worlds and write of knights, princesses, and evil wizards, they share their human emotions, describe punishments for evil deeds, and react empathically to happy and unhappy moments. Rather than excluding all of this information as noise, we see it as useful knowledge about what people care about: simulations of reality suggesting what one would do if one's object of care faced danger. This is the core idea of our approach.
