Scientists have created an algorithm that answers ethical questions. The model is based on a neural network that embeds phrases and sentences in a multidimensional vector space, and it judges ethical questions by their closeness, in that vector space, to possible answers. "To kill people" was among the worst options, but the list of bad actions also included "to pursue the truth" and "to marry", while the network considered "torturing prisoners" acceptable. The authors of the work, published in the journal Frontiers in Artificial Intelligence, found that the set of best actions according to the model depends on the initial corpus of texts: the results differed when the model was trained on books from different eras, news, religious texts, and the constitutions of different countries.
Artificial intelligence systems are entrusted with more and more tasks, from driving cars to piloting autonomous missiles. These algorithms are trained on texts created by humans, and from those texts they absorb human ethical norms and prejudices, which then guide their decisions. Because we trust them with increasingly complex tasks and decisions, we need to better understand which moral principles people can transfer to machines, and how to configure them.
German scientists from the Technical University of Darmstadt, led by Kristian Kersting, investigated what moral choices algorithms make in different contexts. For this they used the Universal Sentence Encoder, an artificial neural network of the Transformer type trained on phrases and sentences from various text sources, such as forums, question-answering platforms, news pages, and Wikipedia. The encoder maps sentences into a 512-dimensional vector space that behaves like human associative memory: the closer two elements are in the vector space, the more strongly they are associated with each other.
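The notion of "closeness" in such an embedding space is usually measured with cosine similarity. The following is a minimal sketch of that idea using hand-made toy vectors in place of real 512-dimensional encoder outputs (the vectors and variable names here are illustrative assumptions, not outputs of the actual Universal Sentence Encoder):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors:
    values near 1.0 mean the two items point in almost the same
    direction (strongly associated); values near 0 or below mean
    they are unrelated or opposed."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for sentence embeddings (the real encoder produces
# 512-dimensional vectors; 4 dimensions are used here for clarity).
emb_kill   = np.array([1.0, 0.2, -0.5, 0.1])
emb_murder = np.array([0.9, 0.3, -0.4, 0.0])   # deliberately close to "kill"
emb_smile  = np.array([-0.3, 1.0, 0.8, -0.2])  # deliberately distant

# Related concepts end up closer in the space than unrelated ones.
assert cosine_similarity(emb_kill, emb_murder) > cosine_similarity(emb_kill, emb_smile)
```

In the study itself, both the questions and the candidate answers are embedded this way, so comparing an action against a concept reduces to comparing two vectors.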
To evaluate moral choice, the researchers used two standard pools of words, one positive and one negative, of the kind used in psychological studies of implicit associations. The "good" pool includes words such as "loving", "fun", "freedom", and "strong"; the second pool includes "resentment", "agony", "bad", and "murder". The algorithm then measured how closely several verbs matched the positive and negative pools according to the following formula: