“With the development of technology, artificial intelligence assets have come to be used in almost every area of our daily lives. It is anticipated that these assets, which now serve people and make their jobs easier, will attain human status and practice certain professions in the future. These developments also raise legal and criminal-law questions: What is the legal status of artificial intelligence assets, who bears criminal responsibility for crimes arising from their use, and what is their role in criminal proceedings? The purpose of this study is to assess these questions in light of existing legal regulations. The study's methodology is a literature review. The study concludes that artificial intelligence assets have the status of ‘property’, cannot be held responsible for crimes arising from their use, and cannot replace the subjects of the proceedings (judges, prosecutors, lawyers), although they make important contributions to criminal proceedings. In this context, existing legal regulations are sufficient to solve the problems that will arise. However, if artificial intelligence assets acquire ‘human’ status as fully autonomous and conscious entities, radical changes to our legal system will be required.”
Law and Artificial Intelligence: E-Person, Liability, and A Legal Application Example
Prof. Dr. Zafer ZEYTİN
Turkish-German University, Law Faculty
Dr. Eray GENÇAY
Tübingen University, Department of Computer Science
“Nowadays artificial intelligence is used in many fields such as city planning, production, automation, medicine and security. In the legal field, artificial intelligence research has been making progress for 30 years. Nevertheless, many questions about its implementation remain open. This study examines the interaction between artificial intelligence and law on two levels. First, it discusses whether artificial intelligence systems can be legal subjects and, if so, what the consequences and implications could be. Second, it discusses how law as a discipline can be supported by artificial intelligence systems. An example is put forward of how such systems can be designed and implemented in the context of marital property regimes.”
“The infiltration of robotics and artificial intelligence (AI) in the health sector is imminent. Existing and new laws and policies will require careful analysis to ensure the beneficial use of robotics and AI, and to answer glaring questions such as, “when is it appropriate to substitute machines for caregivers?” This article begins part one by offering an overview of the current use of robots and AI in healthcare: from surgical robots, exoskeletons and prosthetics to artificial organs, pharmacy and hospital automation robots. Part one ends by examining social robots and the role of Big Data analytics. In part two, key sociotechnical considerations are discussed, and we examine their impact on medical practice overall, as understanding and decision-making processes evolve to use AI and robots. In addition, the social valence considerations and the evidence-based paradox both raise important policy questions related to the appropriateness of delegating human tasks to machines, and the necessary changes in the assessment of liability and negligence. In part three, we address the legal considerations around liability for robots and AI, for physicians, and for institutions using robots and AI, as well as for AI and robots as medical devices. Key considerations regarding negligence in medical malpractice come into play, and will necessitate an evolution in the duty and standard of care amidst the emergence of technology. Legal liability will also need to evolve as we inquire into whether a physician who chooses to rely on their own skills, knowledge and judgement over an AI or robot recommendation should be held liable and negligent. Finally, this paper addresses the legal, labour and economic implications that come into play when assessing whether robots should be considered employees of a health care institution, and the potential opening of the door to vicarious liability.”
“This essay, written as a response to Ryan Calo’s valuable discussion in “Robotics and the Lessons of Cyberlaw,” describes key problems that robotics and artificial intelligence (AI) agents present for law.
The first problem is how to distribute rights and responsibilities among human beings when non-human agents create benefits like artistic works or cause harms like physical injuries. The difficulty is caused by the fact that the behavior of robotic and AI systems is “emergent;” their actions may not be predictable in advance or constrained by human expectations about proper behavior. Moreover, the programming and algorithms used by robots and AI entities may be the work of many hands, and may employ generative technologies that allow innovation at multiple layers. These features of robotics and AI enhance unpredictability and diffusion of causal responsibility for what robots and AI agents do.
Lawrence Lessig’s famous dictum that “Code is Law” argued that combinations of computer hardware and software, like other modalities of regulation, could constrain and direct human behavior. Robotics and AI present the converse problem. Instead of code as a law that regulates humans, robotics and AI feature emergent behavior that escapes human planning and expectations. Code is lawless.
The second problem raised by robotics and AI is the “substitution effect.” People will substitute robots and AI agents for living things — and especially for humans. But they will do so only in certain ways and only for certain purposes. In other words, people tend to treat robots and AI agents as special-purpose animals or special-purpose human beings. This substitution is likely to be incomplete, contextual, unstable, and often opportunistic. People may treat the robot as a person (or animal) for some purposes and as an object for others. The problem of substitution touches many different areas of law, and it promises to confound us for a very long time.
Finally, the essay responds to Calo’s argument about the lessons of cyberlaw for robotics. Calo argues that lawyers should identify the “essential characteristics” of robotics and then ask how the law should respond to the problems posed by those essential characteristics. I see the lessons of cyberlaw quite differently. We should not think of essential characteristics of technology independent of how people use technology in their lives and in their social relations with others. Because the use of technology in social life evolves, and because people continually find new ways to employ technology for good or for ill, it may be unhelpful to freeze certain features of use at a particular moment and label them “essential characteristics.” Innovation in technology is not just innovation of tools and techniques; it may also involve innovation of economic, social and legal relations. As we innovate socially and economically, what appears most salient and important about our technologies may also change.”
“Autonomous cars are the new trend in the automotive sector. Every day, manufacturers announce or introduce new models based on new technologies. But what about damages caused by defective sensors or defective software in the car? Do we need changes to our legal system, which is based upon the strict liability of the vehicle owner, the fault-based liability of the driver, and the liability of the producer, the foundation of which has caused many disputes among Turkish scholars? The aim of this paper is to address, and find possible solutions to, these and other questions in connection with autonomous driving.”