Deepfake Videos: When Seeing Isn’t Believing

 


Holly Kathleen Hall

Arkansas State University

2018

 

Introduction

“The 2016 election evidenced a change in how campaign news and information spreads, especially false or misleading information, and the involvement of a foreign government in its dissemination. This new direction increased apprehension regarding the effect and influence of the new communication dynamic on the democratic process. Advancing technology and increasing popularity of social media networks have led to a rise in video creation and sharing. Innovations in technology are also allowing the public to edit and tinker with videos, creating falsified or fabricated content that appears very real. In 2018 a new software tool was released to the public, allowing the creation of videos of human faces of one person to be substituted for another. The result is videos of people speaking words they have never articulated and/or performing tasks they never did. There has been a dramatic uptick in the creation of these “deepfake” videos, leading to potential legal implications in the areas of privacy, defamation, and free expression.

The extraordinary success of fake news being accepted in the marketplace creates grave concerns for individuals and democracy. This is exacerbated when a video is added to the equation. Ponder some of the following potential situations: blackmailers using deepfakes to extort money or private information, a deepfake showing a government official accepting a bribe he/she never took, or a deepfake depicting an official announcing an impending attack by a foreign government. The possibilities are alarming. The capacity for harm caused by deepfakes naturally leads to considering new laws and regulations. However, any regulation of speech and expression in the United States implicates the First Amendment. In the past we have relied on the “marketplace of ideas” concept, which encourages more speech as a means to uncover the truth and have the best ideas rise to the fore, rather than censor particular content. Is this argument still valid when the public cannot discern what information is true, misleading, or false?

This article will first discuss the rise of fake news in the United States governmental process. Then this article will explore the practice of deepfake videos, including their potential use as tools of deception in the electoral process, and the complexities of regulations around this form of communication, given First Amendment protections. The paper concludes with recommendations to combat deepfakes and fake news in general.”

You can find the original paper at the link below:

https://scholarship.law.edu/cgi/viewcontent.cgi?article=1060&context=jlt

The Chinese Approach to Artificial Intelligence: An Analysis of Policy and Regulation

 

Huw Roberts, Josh Cowls, Jessica Morley, Mariarosaria Taddeo, Vincent Wang & Luciano Floridi

University of Oxford – Oxford Internet Institute

September 1, 2019

 

Abstract

“In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence (AI), entitled ‘New Generation Artificial Intelligence Development Plan’ (新一代人工智能发展规划). This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan ($150 billion) industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China’s AI policies or have assessed the country’s technical capabilities. Instead, in this article, we focus on the socio-political background and policy debates that are shaping China’s AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. Through focusing on the policy backdrop, we seek to provide a more comprehensive understanding of China’s AI policy by bringing together debates and analyses of a wide array of policy documents.”

 

You can find the article at the link below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3469784

 

Robots and Artificial Intelligence in Health Care

 

 

Ian R. Kerr

University of Ottawa – Common Law Section

Jason Millar

University of Ottawa

Noel Corriveau

Engineer

 

August 17, 2017

 

Abstract

 

“The infiltration of robotics and artificial intelligence (AI) in the health sector is imminent. Existing and new laws and policies will require careful analysis to ensure the beneficial use of robotics and AI, and to answer glaring questions such as, “when is it appropriate to substitute machines for caregivers?” This article begins part one by offering an overview of the current use of robots and AI in healthcare: from surgical robots, exoskeletons and prosthetics to artificial organs, pharmacy and hospital automation robots. Part one ends by examining social robots and the role of Big Data analytics. In part two, key sociotechnical considerations are discussed, and we examine their impact on medical practice overall, as the understanding and decision-making processes evolve to use AI and robots. In addition, the social valence considerations and the evidenced-based paradox both raise important policy questions related to the appropriateness of delegating human tasks to machines, and the necessary changes in the assessment of liability and negligence. In part three, we address the legal considerations around liability for robots and AI, for physicians, and for institutions using robots and AI as well as AI and robots as medical devices. Key considerations regarding negligence in medical malpractice come into play, and will necessitate an evolution in the duty and standard of care amidst the emergence of technology. Legal liability will also need to evolve as we inquire into whether a physician choosing to rely on their skills, knowledge and judgement over an AI or robot recommendation should be held liable and negligent. Finally, this paper addresses the legal, labour and economic implications that come into play when assessing whether robots should be considered employees of a health care institution, and the potential opening of the door to vicarious liability.”

 

You can find the original article at the link below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3395890

War, Responsibility and Killer Robots

 

 

Rebecca Crootof

Yale University – Law School

February 24, 2015

 

 

 

 

Abstract

 

“Although many are concerned that autonomous weapon systems may make war “too easy,” no one has addressed how their use may alter the distribution of the constitutional war power. Drones, cyber operations, and other technological advances in weaponry already allow the United States to intervene militarily with minimal boots on the ground, and increased autonomy in weapon systems will further reduce risk to soldiers. As human troops are augmented and supplanted by robotic ones, it will be politically easier to justify using force, especially for short-term military engagements. Accordingly, one of the remaining incentives for Congress to check presidential warmongering — popular outrage at the loss of American lives — will diminish. The integration of autonomous weapon systems into U.S. military forces will therefore contribute to the growing concentration of the war power in the hands of the Executive, with implications for the international doctrine of humanitarian intervention.”

 

You can find the original article at the link below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2569298

 

Artificial Intelligence and Copyright 

 

 

Mustafa ZORLUEL

Lawyer, Graduate Student

Ankara University, Social Science Institute, Department of Private Law

 

Abstract

 

Artificial intelligence is a concept we hear about more and more every day as a result of remarkable technological developments. As artificial intelligence technologies expand and proliferate in social life, legal examination of these technologies is increasing, as is the likelihood that they will give rise to legal disputes. One of the issues in the legal debates surrounding artificial intelligence concerns the products it creates. Today, it is possible for artificial intelligence to produce intellectual products such as poems, paintings, music and books, and we are ever more likely to encounter such products in the near future. Therefore, in the first part of our study, we evaluate the concept of artificial intelligence and the methods by which it operates. In the second part, we assess whether products such as poems, paintings, music and books produced by artificial intelligence can be considered works within the meaning of the law on intellectual and artistic works. Finally, we try to answer the question of who holds the ownership and copyright of these products if they are accepted as works.

 

You can find the original article at the link below:

http://tbbdergisi.barobirlik.org.tr/m2019-142-1851

AI Ethics – Too Principled to Fail?

 

 

Brent Mittelstadt

Oxford Internet Institute

University of Oxford

2019

 

 

Abstract

“AI Ethics is now a global topic of discussion in academic and policy circles. At least 63 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.”

 

You can find the original article at the link below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3391293

 

Researching of Criminal Responsibility of Artificial Intelligence Robots

 

 

Lawyer Melisa Aydemir 

Crime and Punishment: The Journal of Criminal Law

December 2018

 

Abstract

 

“This thesis discusses the criminal responsibility of robots with artificial intelligence. Many recent technological developments, beginning with the Internet of Things, have converged in artificial intelligence. We are witnessing the ever-faster transformation and development of AI robots, a striking reflection of technological progress, and this excites us, as it does many scientists. Yet alongside this excitement there is much uncertainty. When we ask what these systems may be capable of in the future, and even today, we must also ask how any damages or dangers they cause can be remedied when legal norms are violated. In light of all these developments, and setting aside the question of civil liability, we felt obliged to investigate what the position of artificial intelligence in criminal law would be. We hope that by the end of this thesis the reader will have some insight into the penal responsibility of artificial intelligence robots and the status they may assume in the world of law.”

 

You can find the article below:

Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies

 

 

Abstract

“Artificial intelligence technology (or AI) has developed rapidly during the past decade, and the effects of the AI revolution are already being keenly felt in many sectors of the economy. A growing chorus of commentators, scientists, and entrepreneurs has expressed alarm regarding the increasing role that autonomous machines are playing in society, with some suggesting that government regulation may be necessary to reduce the public risks that AI will pose. Unfortunately, the unique features of AI and the manner in which AI can be developed present both practical and conceptual challenges for the legal system. These challenges must be confronted if the legal system is to positively impact the development of AI and ensure that aggrieved parties receive compensation when AI systems cause harm. This article will explore the public risks associated with AI and the competencies of government institutions in managing those risks. It concludes with a proposal for an indirect form of AI regulation based on differential tort liability.”

 

You can find the original paper at the link below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2609777##

 

Criminal Responsibility Arising From Usage of Autonomous Vehicles: A General Review in the context of Turkish Penal Law

 

 

Research Assistant Tuba KELEP PEKMEZ

 

İstanbul University, Law Faculty

 

Abstract

 

“This paper aims to evaluate the probable penal law issues that may arise from the usage of autonomous vehicles with artificial intelligence features within the framework of Turkish Penal Law, and it also aims to open a debate on this issue, which has not been discussed before in the context of the Turkish legal system. First, considering the principles and regulations published by various countries and organizations, the meaning of what determines an autonomous vehicle and the autonomy levels of such vehicles was determined. Second, it has been argued whether autonomous vehicles themselves could bear criminal responsibility. It has been stated that this would not be possible in terms of Turkish criminal law principles. For this reason, the issue was dealt with in terms of the responsibility of drivers and users. In this context, in order to determine the criminal responsibility of drivers and users, the necessity of separating the general principles of criminal responsibility in terms of intentional or negligent offenses is investigated. Although no specific difference has been identified in terms of intentional offenses, the rules related to cautiousness and the criteria for foreseeability should be examined in detail.”

 

You can find the original paper at the link below:

http://dergipark.gov.tr/download/article-file/611504

Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security

 

Robert Chesney

University of Texas

Danielle Keats Citron

University of Maryland 

July 14, 2018

 

 

Abstract

 

“Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection. Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors. While deep-fake technology will bring with it certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well. Our aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it. We survey a broad array of responses, including: the role of technological solutions; criminal penalties, civil liability, and regulatory action; military and covert-action responses; economic sanctions; and market developments. We cover the waterfront from immunities to immutable authentication trails, offering recommendations to improve law and policy and anticipating the pitfalls embedded in various solutions.”

 

You can find the original paper at the link below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3213954
