In this blog post, I examine whether, and to what extent, it is possible to exercise the right to personal data protection in the era of Social Robots with Artificial Intelligence (hereafter, Social Robots). I analyze the concept of consent as strengthened in the European Union’s General Data Protection Regulation (GDPR). I conclude that a Social Robot in personal use challenges the practicability of the GDPR. This conclusion derives, first, from a Social Robot’s ability to collect vast amounts of data naturally, e.g., via natural Human-Robot Interaction or when it connects to the Internet. Since a personal Social Robot’s life source, its blood, is personal data, it would be absurd for a user not to give consent in order to receive more personalized services. In addition, it is well known that most users do not read or listen to consent texts, or do not understand them even if they do. Moreover, it is not easy to answer the question of whether consent can be validly given for purposes that even the developer is unable to foresee (“Unpredictable by Design”). Finally, even if consent was validly given, it is not possible to make a Social Robot “forget” the personal data in question once that data has become an organic part of the robot’s Neural Network. How consent could be withdrawn from a Social Robot should therefore also be questioned.
Liability in Robotics: An International Perspective on Robots as Animals
Richard Kelley, Enrique Schaerer,
Micaela Gomez and Monica Nicolescu
University of Nevada
“As service robots become increasingly common in society, so too will accidents involving service robots. Current law functions effectively to adjudicate the disputes that arise from such accidents, but as technology improves and robot autonomy grows, it will become much harder to apply currently-existing laws. Instead, new legal frameworks will have to be developed to address questions of liability in human-robot interaction. We have already proposed the framework “Robots As Animals,” in which robots are analogized to domesticated animals for legal purposes in disputes about liability. In our initial presentation, though, we focused exclusively on the common law in the United States Federal Government. In this paper, we examine the laws concerning domesticated animals in countries in Europe, Asia, and North America. We apply the lessons learned from our analysis to build an expanded framework that better reflects the established norms of several nations and more explicitly balances the competing interests of producers and consumers of robot technology. We also provide examples of ways in which our new framework may be applied.”
Do We Need Emotionally Intelligent Artificial Agents?
Lisa Fan, Matthias Scheutz, Monika Lohani, Marissa McCoy and Charlene Stokes
University of Utah
“Humans are very apt at reading emotional signals in other humans and even artificial agents, which raises the question of whether artificial agents need to be emotionally intelligent to ensure effective social interactions. For artificial agents without emotional intelligence might generate behavior that is misinterpreted, unexpected, and confusing to humans, violating human expectations and possibly causing emotional harm. Surprisingly, there is a dearth of investigations aimed at understanding the extent to which artificial agents need emotional intelligence for successful interactions. Here, we present the first study in the perception of emotional intelligence (EI) in robots vs. humans. The objective was to determine whether people viewed robots as more or less emotionally intelligent when exhibiting similar behaviors as humans, and to investigate which verbal and nonverbal communication methods were most crucial for human observational judgments. Study participants were shown a scene in which either a robot or a human behaved with either high or low empathy, and then they were asked to evaluate the agent’s emotional intelligence and trustworthiness. The results showed that participants could consistently distinguish the high EI condition from the low EI condition regardless of the variations in which communication methods were observed, and that whether the agent was a robot or human had no effect on the perception. We also found that, relative to low EI, high EI conditions led to greater trust in the agent, which implies that we must design robots to be emotionally intelligent if we wish for users to trust them.”