Personal Robots and Personal Data

 

Gizem Gültekin Várkonyi

Assistant Coordinator, PhD Fellow

University of Szeged

Faculty of Law and Political Sciences

 

 

Abstract

In this blog post, I consider whether, and to what extent, the right to personal data protection can be exercised in the era of Social Robots with Artificial Intelligence (hereafter, Social Robots). I analyze the concept of consent, which was strengthened in the European Union's General Data Protection Regulation (GDPR), and I conclude that a Social Robot in personal use challenges the practicability of the GDPR. This conclusion derives, first, from a Social Robot's ability to collect vast amounts of data naturally, e.g. through ordinary Human-Robot Interaction or whenever it connects to the Internet. Since personal data is a personal Social Robot's life source, its blood, it would be absurd for a user to withhold consent and thereby forgo more personalized services. In addition, it is well known that most users do not read or listen to consent texts, and often do not understand them even when they do. Moreover, it is not easy to answer the question of whether consent can be validly given for purposes that even the developer cannot foresee (Unpredictable by Design). Finally, even if consent was validly given, a Social Robot cannot simply be made to "forget" the personal data in question once that data has become an organic part of the robot's neural network. How consent could be withdrawn from a Social Robot at all is a further open question.
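To make that last point concrete, here is a minimal sketch (a hypothetical toy logistic-regression model on synthetic data, not any real robot's learning system; the dataset, the train function, and the record index are all illustrative assumptions) of why deleting a raw record does not undo its influence on weights that were already trained on it, so honoring withdrawn consent would in general require retraining:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: each row stands in for one user's interaction
# record with the robot; row PERSONAL plays the data subject's record.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
PERSONAL = 42

def train(X, y, epochs=200, lr=0.1):
    # Plain logistic regression fitted by batch gradient descent.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))    # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)  # gradient step
    return w

w_deployed = train(X, y)  # the model the robot ships with

# "Withdrawing consent" by deleting the raw record changes the dataset,
# but leaves the already-learned weights untouched:
X_wo = np.delete(X, PERSONAL, axis=0)
y_wo = np.delete(y, PERSONAL)

# The only guaranteed way to purge the record's influence is a full
# retrain without it -- rarely practical for a robot in the field:
w_retrained = train(X_wo, y_wo)
print("residual influence of one record on each weight:")
print(np.abs(w_deployed - w_retrained))

The nonzero differences show that the single record left a trace in every weight; short of retraining, that trace persists even after the record itself is erased. This is the gap that current research on "machine unlearning" tries to close.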

 

For more details about the author:

https://robotic.legal/author/ggvarkonyi/

Liability in Robotics: An International Perspective on Robots as Animals

 

 

Richard Kelley, Enrique Schaerer, Micaela Gomez and Monica Nicolescu

University of Nevada

2010

 

 

Abstract

“As service robots become increasingly common in society, so too will accidents involving service robots. Current law functions effectively to adjudicate the disputes that arise from such accidents, but as technology improves and robot autonomy grows, it will become much harder to apply currently-existing laws. Instead, new legal frameworks will have to be developed to address questions of liability in human-robot interaction. We have already proposed the framework “Robots As Animals,” in which robots are analogized to domesticated animals for legal purposes in disputes about liability. In our initial presentation, though, we focused exclusively on the common law in the United States Federal Government. In this paper, we examine the laws concerning domesticated animals in countries in Europe, Asia, and North America. We apply the lessons learned from our analysis to build an expanded framework that better reflects the established norms of several nations and more explicitly balances the competing interests of producers and consumers of robot technology. We also provide examples of ways in which our new framework may be applied.”

 

The full article is available at the link below:

https://pdfs.semanticscholar.org/8174/98e6a34922df365854c6d7fd2f18a8d1900d.pdf

Do We Need Emotionally Intelligent Artificial Agents?

 

 

Lisa Fan,
Matthias Scheutz,
Monika Lohani,
Marissa McCoy and Charlene Stokes

University of Utah 

 

 

Abstract

“Humans are very apt at reading emotional signals in other humans and even artificial agents, which raises the question of whether artificial agents need to be emotionally intelligent to ensure effective social interactions. For artificial agents without emotional intelligence might generate behavior that is misinterpreted, unexpected, and confusing to humans, violating human expectations and possibly causing emotional harm. Surprisingly, there is a dearth of investigations aimed at understanding the extent to which artificial agents need emotional intelligence for successful interactions. Here, we present the first study in the perception of emotional intelligence (EI) in robots vs. humans. The objective was to determine whether people viewed robots as more or less emotionally intelligent when exhibiting similar behaviors as humans, and to investigate which verbal and nonverbal communication methods were most crucial for human observational judgments. Study participants were shown a scene in which either a robot or a human behaved with either high or low empathy, and then they were asked to evaluate the agent’s emotional intelligence and trustworthiness. The results showed that participants could consistently distinguish the high EI condition from the low EI condition regardless of the variations in which communication methods were observed, and that whether the agent was a robot or human had no effect on the perception. We also found that relative to low EI high EI conditions led to greater trust in the agent, which implies that we must design robots to be emotionally intelligent if we wish for users to trust them.”

 

The full article is available at the link below:

https://hrilab.tufts.edu/publications/fanetal17iva.pdf


For citation:

Selin Cetin
"Do We Need Emotionally Intelligent Artificial Agents?"
Hukuk & Robotik, Saturday January 20th, 2018
https://robotic.legal/en/do-we-need-emotionally-intelligent-artificial-agents/ (accessed 24/06/2021)