The Role and Future of Artificial Intelligence in Criminal Procedure Law

 

Dr. Zafer İçer, Marmara University Law Faculty

Research Asst. Başak Buluz, Gebze Technical University Engineering Faculty

 

 

 

Abstract

“Since the beginning of the present century, innovative technologies have evolved with unprecedented speed; cyber-physical systems and the innovations produced through the internet linking these systems have created a new technological era. One of the most important subjects of this digital era is “artificial intelligence systems,” often called the catalyst of the industrial digital transformation. Artificial intelligence systems interact with many different disciplines and touch humanity and everyday life at every point, from driverless cars to virtual assistants, from smart home products to industrial automation; and in recent years, as in all legal fields, they have begun to take their place in criminal procedure. In various countries, intelligent digital assistants are actively used to identify and analyze concrete legal conflicts and to predict the possible consequences of lawsuits to be filed, and artificial intelligence platforms are being used for tasks such as legal analysis and evidence evaluation. Undoubtedly, the common goal of these systems is to provide fast, efficient and accurate solutions in this area. On the other hand, in the near future, robotic systems are likely to become subjects of adjudication, and they may play important roles in decision-making processes as robot judges, prosecutors and lawyers. In this study, the role and future of these intelligent systems in criminal proceedings will be discussed from a scientific perspective, in light of current examples and possible developments, with reference to the technical aspects of machine learning and artificial intelligence.”

 

You can find the original and full paper at the link below:

Machine Learning and Law

 

Lawyering in the age of algorithms

 

Harry Surden

 

University of Colorado Law School

 

March 26, 2014

 

 

 

 

Abstract

“This Article explores the application of machine learning techniques within the practice of law. Broadly speaking “machine learning” refers to computer algorithms that have the ability to “learn” or improve in performance over time on some task. In general, machine learning algorithms are designed to detect patterns in data and then apply these patterns going forward to new data in order to automate particular tasks. Outside of law, machine learning techniques have been successfully applied to automate tasks that were once thought to necessitate human intelligence — for example language translation, fraud-detection, driving automobiles, facial recognition, and data-mining. If performing well, machine learning algorithms can produce automated results that approximate those that would have been made by a similarly situated person.

This Article begins by explaining some basic principles underlying machine learning methods, in a manner accessible to non-technical audiences. The second part explores a broader puzzle: legal practice is thought to require advanced cognitive abilities, but such higher-order cognition remains outside the capability of current machine-learning technology. This part identifies a core principle: how certain tasks that are normally thought to require human intelligence can sometimes be automated through the use of non-intelligent computational techniques that employ heuristics or proxies (e.g., statistical correlations) capable of producing useful, “intelligent” results. The third part applies this principle to the practice of law, discussing machine-learning automation in the context of certain legal tasks currently performed by attorneys: including predicting the outcomes of legal cases, finding hidden relationships in legal documents and data, electronic discovery, and the automated organization of documents.”
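Surden's core point, that ML algorithms detect patterns in past data and apply them to new data, can be illustrated with a minimal, hypothetical sketch. The features, labels, and numbers below are invented for illustration and do not come from the paper: a 1-nearest-neighbour "learner" predicts a case outcome by finding the most similar past case.

```python
# Hypothetical past cases as (features, label) pairs.
# Features: (number of favourable precedents, claim amount in $1000s);
# label: 1 = plaintiff won, 0 = plaintiff lost.
past_cases = [
    ((5, 10), 1),
    ((4, 20), 1),
    ((1, 90), 0),
    ((0, 50), 0),
]

def predict(features):
    """Predict the outcome of a new case from the most similar past case
    (1-nearest-neighbour by squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(past_cases, key=lambda case: dist(case[0], features))
    return nearest[1]

# A new case resembling the "won" cases is predicted as a win.
print(predict((4, 15)))
```

This is exactly the "heuristics or proxies" idea from the abstract: the program has no legal understanding, yet similarity to past outcomes can still produce a useful prediction.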

 

You can find the original paper at the link below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2417415

Privacy and Freedom of Expression In the Age of Artificial Intelligence

 


Article 19 & Privacy International

April 2018

 

Executive Summary

“Artificial Intelligence (AI) is part of our daily lives. This technology shapes how people access information, interact with devices, share personal information, and even understand foreign languages. It also transforms how individuals and groups can be tracked and identified, and dramatically alters what kinds of information can be gleaned about people from their data.

AI has the potential to revolutionise societies in positive ways. However, as with any scientific or technological advancement, there is a real risk that the use of new tools by states or corporations will have a negative impact on human rights.

While AI impacts a plethora of rights, ARTICLE 19 and Privacy International are particularly concerned about the impact it will have on the right to privacy and the right to freedom of expression and information.

This scoping paper focuses on applications of ‘artificial narrow intelligence’: in particular, machine learning and its implications for human rights.

The aim of the paper is fourfold:

  1. Present key technical definitions to clarify the debate;
  2. Examine key ways in which AI impacts the right to freedom of expression and the right to privacy and outline key challenges;
  3. Review the current landscape of AI governance, including various existing legal, technical, and corporate frameworks and industry-led AI initiatives that are relevant to freedom of expression and privacy; and
  4. Provide initial suggestions for rights-based solutions which can be pursued by civil society organisations and other stakeholders in AI advocacy activities.

We believe that policy and technology responses in this area must:

  • Ensure protection of human rights, in particular the right to freedom of expression and the right to privacy;
  • Ensure accountability and transparency of AI;
  • Encourage governments to review the adequacy of any legal and policy frameworks, and regulations on AI with regard to the protection of freedom of expression and privacy;
  • Be informed by a holistic understanding of the impact of the technology: case studies and empirical research on the impact of AI on human rights must be collected; and
  • Be developed in collaboration with a broad range of stakeholders, including civil society and expert networks.”

 

You can find the original report at the link below:

https://privacyinternational.org/sites/default/files/2018-04/Privacy%20and%20Freedom%20of%20Expression%20%20In%20the%20Age%20of%20Artificial%20Intelligence.pdf

Accountability of AI Under the Law

Finale Doshi-Velez & Mason Kortz

Harvard University

November 27, 2017

 

 

 

Abstract

“The ubiquity of systems using artificial intelligence or “AI” has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before—applications range from clinical decision support to autonomous driving and predictive policing. That said, our AIs continue to lag in common sense reasoning [McCarthy, 1960], and thus there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems [Bostrom, 2003, Amodei et al., 2016, Sculley et al., 2014].

How can we take advantage of what AI systems have to offer, while also holding them accountable? In this work, we focus on one tool: explanation. Questions about a legal right to explanation from AI systems were recently debated in the EU General Data Protection Regulation [Goodman and Flaxman, 2016, Wachter et al., 2017a], and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. Good choices about when to demand explanation can help prevent negative consequences from AI systems, while poor choices may not only fail to hold AI systems accountable but also hamper the development of much-needed beneficial AI systems.

Below, we briefly review current societal, moral, and legal norms around explanation, and then focus on the different contexts under which explanation is currently required under the law. We find that there exists great variation around when explanation is demanded, but there also exist important consistencies: when demanding explanation from humans, what we typically want to know is whether and how certain input factors affected the final decision or outcome.

These consistencies allow us to list the technical considerations that must be considered if we desired AI systems that could provide kinds of explanations that are currently required of humans under the law. Contrary to popular wisdom of AI systems as indecipherable black boxes, we find that this level of explanation should generally be technically feasible but may sometimes be practically onerous—there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard.”
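The kind of explanation the authors describe, whether and how certain input factors affected a decision, can be sketched with a simple counterfactual probe. The decision rule, factor names, and weights below are hypothetical stand-ins, not anything from the paper: each binary input factor is flipped in turn, and we record whether the decision changes.

```python
def model(factors):
    """Opaque decision rule (a stand-in for an AI system under scrutiny)."""
    score = 2 * factors["income"] + 4 * factors["credit_history"] - factors["defaults"]
    return "approve" if score >= 3 else "deny"

def influential_factors(factors):
    """Flip each 0/1 input factor and report whether the decision changes.
    A crude counterfactual test of which factors affected the outcome."""
    base = model(factors)
    influences = {}
    for name in factors:
        perturbed = dict(factors, **{name: 1 - factors[name]})  # flip binary factor
        influences[name] = model(perturbed) != base
    return base, influences

decision, influences = influential_factors(
    {"income": 1, "credit_history": 1, "defaults": 0}
)
print(decision, influences)
```

In this toy setup only `credit_history` flips the outcome, illustrating the abstract's point: such factor-level explanations are generally technically feasible even when the model itself is treated as a black box.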

 

You can reach the article at the link below:

https://cyber.harvard.edu/publications/2017/11/AIExplanation

Selin Cetin
"Accountability of AI Under the Law"
Hukuk & Robotik, Saturday February 3rd, 2018
https://robotic.legal/en/yapay-zekanin-yasaya-gore-hesap-verebilirligi/ (accessed 15/08/2022)