50 Questions on AI

 

 

“50 Questions on AI,” by Prof. Cem Say of the Department of Computer Engineering at Bogazici University, was published this month. The book is rich in content and has drawn interest from readers in many fields.

While posing questions about the many ways we use computers, Say writes in a style that embraces the reader, weaving in his own memories. Through this narrative approach, he keeps the technical material at a level that readers of all backgrounds can follow.

 

As a lawyer, I turned first, and eagerly, to the chapter “Can computers be lawyers?”. Say argues that these systems are already more capable than lawyers at documentation, case-law research, and some decision-making, and I think he is largely right. Many tedious tasks that cost lawyers hours of work can be eased by such systems.

The last question reminds us of our limits: “Does human intelligence have a future?” While he frames the outlook positively, noting that “these additional brains” can be useful to us, he underlines that their abuse by oppressive governments could hold humanity back.

The book also takes up many current social questions, such as “Can robots fall in love?”, “Can computers make an invention?”, and “Can robots be drafted into the military?”.

The book is a delight to read and flows smoothly… I do not want to give away more details, but its final sentence is very promising: “We can succeed!”

 

 

A small wish: I hope it will be translated into English. 🙂

 

Selin

Machine Learning and Law

 

Lawyering in the age of algorithms

 

Harry Surden

 

University of Colorado Law School

 

March 26, 2014

 

 

 

 

Abstract

“This Article explores the application of machine learning techniques within the practice of law. Broadly speaking “machine learning” refers to computer algorithms that have the ability to “learn” or improve in performance over time on some task. In general, machine learning algorithms are designed to detect patterns in data and then apply these patterns going forward to new data in order to automate particular tasks. Outside of law, machine learning techniques have been successfully applied to automate tasks that were once thought to necessitate human intelligence — for example language translation, fraud-detection, driving automobiles, facial recognition, and data-mining. If performing well, machine learning algorithms can produce automated results that approximate those that would have been made by a similarly situated person.

This Article begins by explaining some basic principles underlying machine learning methods, in a manner accessible to non-technical audiences. The second part explores a broader puzzle: legal practice is thought to require advanced cognitive abilities, but such higher-order cognition remains outside the capability of current machine-learning technology. This part identifies a core principle: how certain tasks that are normally thought to require human intelligence can sometimes be automated through the use of non-intelligent computational techniques that employ heuristics or proxies (e.g., statistical correlations) capable of producing useful, “intelligent” results. The third part applies this principle to the practice of law, discussing machine-learning automation in the context of certain legal tasks currently performed by attorneys: including predicting the outcomes of legal cases, finding hidden relationships in legal documents and data, electronic discovery, and the automated organization of documents.”
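The abstract’s core idea, detecting patterns in labeled data and then applying them to new data, is exactly what a supervised text classifier does, and it underlies the e-discovery and document-organization tasks Surden discusses. Below is a minimal sketch of that idea in Python; the documents, labels, and model choice are my own illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of "learn patterns from data, apply them to new data":
# a supervised classifier that sorts legal documents into categories.
# All documents and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny hand-labeled training set (real e-discovery sets are far larger).
train_docs = [
    "lease agreement between landlord and tenant for office space",
    "employment contract with non-compete and severance terms",
    "email scheduling the quarterly budget review meeting",
    "memo summarizing deposition testimony in the patent dispute",
]
train_labels = ["contract", "contract", "correspondence", "litigation"]

# The pipeline learns word-frequency patterns that correlate with each label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, train_labels)

# It then applies those learned patterns to a document it has never seen.
print(model.predict(["draft brief opposing summary judgment in the dispute"]))
```

Nothing in this pipeline “understands” law; as Surden’s heuristics-or-proxies principle suggests, statistical correlations between words and labels are what produce the apparently intelligent result.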

 

You can find the link and original paper below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2417415

Policy Recommendations For A Safe And Secure Use Of Artificial Intelligence, Automated Decision-Making, Robotics And Connected Devices In A Modern Consumer World

 

European Consumer Consultative Group Opinion

 May 16, 2018

 

Executive Summary

“Machine-learning and automated decision-making in a consumer context

The increasing use of self-learning algorithms and machine-learning that steer processes and take decisions on behalf or instead of humans inevitably leads to a set of societal and ethical questions. From a consumer point of view, Algorithmic Decision Making (ADM), de facto automated decision-making, based on big data, is of particular interest and high importance as the number of affected consumers could potentially be high. As a matter of fact, the ranges for application of ADM in consumers’ everyday lives are virtually endless. Artificial intelligence is also no science-fiction of distant future times. Examples include algorithms used by online retailers to tailor prices to individual consumers based on estimates of their location and by self-driving cars to go around.

It is therefore essential that the European regulatory framework of consumer protection is able to meet the challenges posed not only by connected devices but also by automated decision-making. Can we still speak about consumer choice when preferences are defined, predicted, and shaped by algorithms? Consumer organisations call on the European Institutions to assess and revise relevant consumer protection legislation to ensure that consumers’ rights are respected by algorithms and automated decision making. An elaborated form of accountability and ethical processing is needed to foster the benefits of this use of data but to also address any consequent risks.”

 

You can find the link and original recommendation below:

https://ec.europa.eu/info/sites/info/files/eccg-recommendation-on-ai_may2018_en.pdf

How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem

 


 

Amanda Levendowski

New York University School of Law

July 24, 2017

 

 

 

 

Abstract:

 

“As the use of artificial intelligence (AI) continues to spread, we have seen an increase in examples of AI systems reflecting or exacerbating societal bias, from racist facial recognition to sexist natural language processing. These biases threaten to overshadow AI’s technological gains and potential benefits. While legal and computer science scholars have analyzed many sources of bias, including the unexamined assumptions of its often-homogenous creators, flawed algorithms, and incomplete datasets, the role of the law itself has been largely ignored. Yet just as code and culture play significant roles in how AI agents learn about and act in the world, so too do the laws that govern them. This Article is the first to examine perhaps the most powerful law impacting AI bias: copyright.

Artificial intelligence often learns to “think” by reading, viewing, and listening to copies of human works. This Article first explores the problem of bias through the lens of copyright doctrine, looking at how the law’s exclusion of access to certain copyrighted source materials may create or promote biased AI systems. Copyright law limits bias mitigation techniques, such as testing AI through reverse engineering, algorithmic accountability processes, and competing to convert customers. The rules of copyright law also privilege access to certain works over others, encouraging AI creators to use easily available, legally low-risk sources of data for teaching AI, even when those data are demonstrably biased. Second, it examines how a different part of copyright law—the fair use doctrine—has traditionally been used to address similar concerns in other technological fields, and asks whether it is equally capable of addressing them in the field of AI bias. The Article ultimately concludes that it is, in large part because the normative values embedded within traditional fair use ultimately align with the goals of mitigating AI bias and, quite literally, creating fairer AI systems.”

 

You can find the link and original paper below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3024938

 

Is Tricking a Robot Hacking?

 

Ryan Calo

Ivan Evtimov

Earlence Fernandes

Tadayoshi Kohno

David O’Hair

University of Washington 

 

Abstract

“The term “hacking” has come to signify breaking into a computer system. A number of local, national, and international laws seek to hold hackers accountable for breaking into computer systems to steal information or disrupt their operation. Other laws and standards incentivize private firms to use best practices in securing computers against attack.

A new set of techniques, aimed not at breaking into computers but at manipulating the increasingly intelligent machine learning models that control them, may force law and legal institutions to reevaluate the very nature of hacking. Three of the authors have shown, for example, that it is possible to use one’s knowledge of a system to fool a driverless car into perceiving a stop sign as a speed limit. Other techniques build secret blind spots into machine learning systems or seek to reconstruct the private data that went into their training.

The unfolding renaissance in artificial intelligence (AI), coupled with an almost parallel discovery of its vulnerabilities, requires a reexamination of what it means to “hack,” i.e., to compromise a computer system. The stakes are significant. Unless legal and societal frameworks adjust, the consequences of misalignment between law and practice include inadequate coverage of crime, missing or skewed security incentives, and the prospect of chilling critical security research. This last one is particularly dangerous in light of the important role researchers can play in revealing the biases, safety limitations, and opportunities for mischief that the mainstreaming of artificial intelligence appears to present.

The authors of this essay represent an interdisciplinary team of experts in machine learning, computer security, and law. Our aim is to introduce the law and policy community within and beyond academia to the ways adversarial machine learning (ML) alter the nature of hacking and with it the cybersecurity landscape. Using the Computer Fraud and Abuse Act of 1986 — the paradigmatic federal anti-hacking law — as a case study, we mean to evidence the burgeoning disconnect between law and technical practice. And we hope to explain what is at stake should we fail to address the uncertainty that flows from the prospect that hacking now includes tricking.”
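The “tricking” described here exploits a model’s learned decision boundaries rather than breaking into the system that runs it. One standard adversarial technique is the fast gradient sign method (FGSM), sketched below under stated assumptions: the stop-sign attack by three of the authors used a more elaborate physical-world perturbation, and the classifier, image tensor, and class index here are placeholders.

```python
# A minimal sketch of an adversarial perturbation via the fast gradient
# sign method (FGSM). This is a standard textbook technique, not the
# physical stop-sign attack described in the essay.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Nudge each pixel slightly in the direction that increases the model's
    loss, so the prediction changes while the image looks the same to us."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Small, human-imperceptible step along the sign of the gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: `classifier` is any trained image model,
# `stop_sign` a (1, 3, H, W) tensor, `stop_idx` the stop-sign class index.
# perturbed = fgsm_attack(classifier, stop_sign, torch.tensor([stop_idx]))
# classifier(perturbed).argmax(dim=1)  # may no longer be `stop_idx`
```

The legal point follows from the mechanics: no access-control barrier is circumvented, which is why statutes like the CFAA, written around breaking in, map awkwardly onto this kind of manipulation.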

 

You can find the link and original paper below: 

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3150530

Privacy and Freedom of Expression In the Age of Artificial Intelligence

 


Article 19 & Privacy International

April, 2018

 

Executive Summary

“Artificial Intelligence (AI) is part of our daily lives. This technology shapes how people access information, interact with devices, share personal information, and even understand foreign languages. It also transforms how individuals and groups can be tracked and identified, and dramatically alters what kinds of information can be gleaned about people from their data.

AI has the potential to revolutionise societies in positive ways. However, as with any scientific or technological advancement, there is a real risk that the use of new tools by states or corporations will have a negative impact on human rights.

While AI impacts a plethora of rights, ARTICLE 19 and Privacy International are particularly concerned about the impact it will have on the right to privacy and the right to freedom of expression and information.

This scoping paper focuses on applications of ‘artificial narrow intelligence’: in particular, machine learning and its implications for human rights.

The aim of the paper is fourfold:

  1. Present key technical definitions to clarify the debate;
  2. Examine key ways in which AI impacts the right to freedom of expression and the right to privacy and outline key challenges;
  3. Review the current landscape of AI governance, including various existing legal, technical, and corporate frameworks and industry-led AI initiatives that are relevant to freedom of expression and privacy; and
  4. Provide initial suggestions for rights-based solutions which can be pursued by civil society organisations and other stakeholders in AI advocacy activities.

We believe that policy and technology responses in this area must:

  • Ensure protection of human rights, in particular the right to freedom of expression and the right to privacy;
  • Ensure accountability and transparency of AI;
  • Encourage governments to review the adequacy of any legal and policy frameworks, and regulations on AI with regard to the protection of freedom of expression and privacy;
  • Be informed by a holistic understanding of the impact of the technology: case studies and empirical research on the impact of AI on human rights must be collected; and
  • Be developed in collaboration with a broad range of stakeholders, including civil society and expert networks.”

 

You can find the link and original report below:

https://privacyinternational.org/sites/default/files/2018-04/Privacy%20and%20Freedom%20of%20Expression%20%20In%20the%20Age%20of%20Artificial%20Intelligence.pdf

When AIs Outperform Doctors

 


 

 

A. Michael Froomkin,  

Ian Kerr &

Joëlle Pineau

We Robot 2018 Conference

 

 

 

Abstract

“Someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. What will the dominance of ML diagnostics mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors, and—in the longer run—for the quality of medical diagnostics itself?

This article argues that once ML diagnosticians, such as those based on neural networks, are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented carefully, a physician’s duty to use ML systems in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve. In time, effective machine learning could create overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Ultimately, a similar dynamic might extend to treatment also. If we reach the point where the bulk of clinical outcomes collected in databases are ML-generated diagnoses, this may result in future decision scenarios that are not easily audited or understood by human doctors. Given the well-documented fact that treatment strategies are often not as effective when deployed in real clinical practice compared to preliminary evaluation, the lack of transparency introduced by the ML algorithms could lead to a decrease in quality of care. The article describes salient technical aspects of this scenario particularly as it relates to diagnosis and canvasses various possible technical and legal solutions that would allow us to avoid these unintended consequences of medical malpractice law. Ultimately, we suggest there is a strong case for altering existing medical liability rules in order to avoid a machine only diagnostic regime. We argue that the appropriate revision to the standard of care requires the maintenance of meaningful participation by physicians in the loop.”

 

You can find the link and original paper below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3114347

Automated Individual Decision-Making And Profiling

 


ARTICLE 29 DATA PROTECTION WORKING PARTY

3 October 2017, 17/EN WP 251

 

 

INTRODUCTION

The General Data Protection Regulation (the GDPR) specifically addresses profiling and automated individual decision-making, including profiling.

Profiling and automated decision-making are used in an increasing number of sectors, both private and public. Banking and finance, healthcare, taxation, insurance, marketing and advertising are just a few examples of the fields where profiling is being carried out more regularly to aid decision-making.

Advances in technology and the capabilities of big data analytics, artificial intelligence and machine learning have made it easier to create profiles and make automated decisions with the potential to significantly impact individuals’ rights and freedoms.

The widespread availability of personal data on the internet and from Internet of Things (IoT) devices, and the ability to find correlations and create links, can allow aspects of an individual’s personality or behaviour, interests and habits to be determined, analysed and predicted.

Profiling and automated decision-making can be useful for individuals and organisations as well as for the economy and society as a whole, delivering benefits such as increased efficiencies and resource savings.

They have many commercial applications: for example, they can be used to better segment markets and to tailor services and products to individual needs. Medicine, education, healthcare and transportation can also all benefit from these processes.

However, profiling and automated decision-making can pose significant risks for individuals’ rights and freedoms which require appropriate safeguards.

These processes can be opaque. Individuals might not know that they are being profiled or understand what is involved.

Profiling can perpetuate existing stereotypes and social segregation. It can also lock a person into a specific category and restrict them to their suggested preferences. This can undermine their freedom to choose, for example, certain products or services such as books, music or newsfeeds. It can lead to inaccurate predictions, denial of services and goods and unjustified discrimination in some cases.

The GDPR introduces new provisions to address the risks arising from profiling and automated decision-making, notably, but not limited to, privacy. The purpose of these guidelines is to clarify those provisions.

This document covers:

- Definitions of profiling and automated decision-making and the GDPR approach to these in general – Chapter II
- Specific provisions on automated decision-making as defined in Article 22 – Chapter III
- General provisions on profiling and automated decision-making – Chapter IV
- Children and profiling – Chapter V
- Data protection impact assessments – Chapter VI

 

The Annexes provide best practice recommendations, building on the experience gained in EU Member States.

 

 

You can reach the guidelines from the link below:

http://ec.europa.eu/newsroom/document.cfm?doc_id=47742

Big Data, Artificial Intelligence, Machine Learning and Data Protection

 

Information Commissioner’s Office 

4 Sept 2017

 

FOREWORD

Big data is no fad. Since 2014 when my office’s first paper on this subject was published, the application of big data analytics has spread throughout the public and private sectors. Almost every day I read news articles about its capabilities and the effects it is having, and will have, on our lives. My home appliances are starting to talk to me, artificially intelligent computers are beating professional board-game players and machine learning algorithms are diagnosing diseases.

The fuel propelling all these advances is big data – vast and disparate datasets that are constantly and rapidly being added to. And what exactly makes up these datasets? Well, very often it is personal data. The online form you filled in for that car insurance quote. The statistics your fitness tracker generated from a run. The sensors you passed when walking into the local shopping centre. The social-media postings you made last week. The list goes on…

So it’s clear that the use of big data has implications for privacy, data protection and the associated rights of individuals – rights that will be strengthened when the General Data Protection Regulation (GDPR) is implemented. Under the GDPR, stricter rules will apply to the collection and use of personal data. In addition to being transparent, organisations will need to be more accountable for what they do with personal data. This is no different for big data, AI and machine learning.

However, implications are not barriers. It is not a case of big data ‘or’ data protection, or big data ‘versus’ data protection. That would be the wrong conversation. Privacy is not an end in itself, it is an enabling right. Embedding privacy and data protection into big data analytics enables not only societal benefits such as dignity, personality and community, but also organisational benefits like creativity, innovation and trust. In short, it enables big data to do all the good things it can do. Yet that’s not to say someone shouldn’t be there to hold big data to account.

In this world of big data, AI and machine learning, my office is more relevant than ever. I oversee legislation that demands fair, accurate and non-discriminatory use of personal data; legislation that also gives me the power to conduct audits, order corrective action and issue monetary penalties. Furthermore, under the GDPR my office will be working hard to improve standards in the use of personal data through the implementation of privacy seals and certification schemes. We’re uniquely placed to provide the right framework for the regulation of big data, AI and machine learning, and I strongly believe that our efficient, joined-up and co-regulatory approach is exactly what is needed to pull back the curtain in this space.

So the time is right to update our paper on big data, taking into account the advances made in the meantime and the imminent implementation of the GDPR. Although this is primarily a discussion paper, I do recognise the increasing utilisation of big data analytics across all sectors and I hope that the more practical elements of the paper will be of particular use to those thinking about, or already involved in, big data.

This paper gives a snapshot of the situation as we see it. However, big data, AI and machine learning is a fast-moving world and this is far from the end of our work in this space. We’ll continue to learn, engage, educate and influence – all the things you’d expect from a relevant and effective regulator.

Elizabeth Denham
Information Commissioner

 

You can reach the full report from the link below:

https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-protection.pdf

