AI Now Report 2018

 

December 2018

 

RECOMMENDATIONS

 

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain. The implementation of AI systems is expanding rapidly, without adequate governance, oversight, or accountability regimes. Domains like health, education, criminal justice, and welfare all have their own histories, regulatory frameworks, and hazards. However, a national AI safety body or general AI standards and certification model will struggle to meet the sectoral expertise requirements needed for nuanced regulation. We need a sector-specific approach that does not prioritize the technology, but focuses on its application within a given domain. Useful examples of sector-specific approaches include the United States Federal Aviation Administration and the National Highway Traffic Safety Administration.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest. Such regulation should include national laws that require strong oversight, clear limitations, and public transparency. Communities should have the right to reject the application of these technologies in both public and private contexts. Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance. Affect recognition deserves particular attention. Affect recognition is a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and “worker engagement” based on images or video of faces. These claims are not backed by robust scientific evidence, and are being applied in unethical and irresponsible ways that often recall the pseudosciences of phrenology and physiognomy. Linking affect recognition to hiring, access to insurance, education, and policing creates deeply concerning risks, at both an individual and societal level.
  3. The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems. Government regulation is an important component, but leading companies in the AI industry also need internal accountability structures that go beyond ethics guidelines. This should include rank-and-file employee representation on the board of directors, external ethics advisory boards, and the implementation of independent monitoring and transparency efforts. Third party experts should be able to audit and publish about key systems, and companies need to ensure that their AI infrastructures can be understood from “nose to tail,” including their ultimate application and use.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector. Vendors and developers who create AI and automated decision systems for use in government should agree to waive any trade secrecy or other legal claim that inhibits full auditing and understanding of their software. Corporate secrecy laws are a barrier to due process: they contribute to the “black box effect” rendering systems opaque and unaccountable, making it hard to assess bias, contest decisions, or remedy errors. Anyone procuring these technologies for use in the public sector should demand that vendors waive these claims before entering into any agreements.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers. Organizing and resistance by technology workers has emerged as a force for accountability and ethical decision making. Technology companies need to protect workers’ ability to organize, whistleblow, and make ethical choices about what projects they work on. This should include clear policies accommodating and protecting conscientious objectors, ensuring workers the right to know what they are working on, and the ability to abstain from such work without retaliation or retribution. Workers raising ethical concerns must also be protected, as should whistleblowing in the public interest.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services. The hype around AI is only growing, leading to widening gaps between marketing promises and actual product performance. With these gaps come increasing risks to both individuals and commercial customers, often with grave consequences. Much like other products and services that have the potential to seriously impact or exploit populations, AI vendors should be held to high standards for what they can promise, especially when the scientific evidence to back these promises is inadequate and the longer-term consequences are unknown.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces. Technology companies and the AI field as a whole have focused on the “pipeline model,” looking to train and hire more diverse employees. While this is important, it overlooks what happens once people are hired into workplaces that exclude, harass, or systemically undervalue people on the basis of gender, race, sexuality, or disability. Companies need to examine the deeper issues in their workplaces, and the relationship between exclusionary cultures and the products they build, which can produce tools that perpetuate bias and discrimination. This change in focus needs to be accompanied by practical action, including a commitment to end pay and opportunity inequity, along with transparency measures about hiring and retention.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.” For meaningful accountability, we need to better understand and track the component parts of an AI system and the full supply chain on which it relies: that means accounting for the origins and use of training data, test data, models, application program interfaces (APIs), and other infrastructural components over a product life cycle. We call this accounting for the “full stack supply chain” of AI systems, and it is a necessary condition for a more responsible form of auditing. The full stack supply chain also includes understanding the true environmental and labor costs of AI systems. This incorporates energy use, the use of labor in the developing world for content moderation and training data creation, and the reliance on clickworkers to develop and maintain AI systems.
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues. The people most at risk of harm from AI systems are often those least able to contest the outcomes. We need increased support for robust mechanisms of legal redress and civic participation. This includes supporting public advocates who represent those cut off from social services due to algorithmic decision making, civil society organizations and labor organizers that support groups that are at risk of job loss and exploitation, and community-based infrastructures that enable public participation.
  10. University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

You can find the full report at the link below:

https://ainowinstitute.org/AI_Now_2018_Report.pdf

Artificial Intelligence, Automation and Work

 


Daron Acemoglu – MIT

 

Pascual Restrepo – Boston University

 

January 4, 2018


Abstract

 

“We summarize a framework for the study of the implications of automation and AI on the demand for labor, wages, and employment. Our task-based framework emphasizes the displacement effect that automation creates as machines and AI replace labor in tasks that it used to perform. This displacement effect tends to reduce the demand for labor and wages. But it is counteracted by a productivity effect, resulting from the cost savings generated by automation, which increase the demand for labor in non-automated tasks. The productivity effect is complemented by additional capital accumulation and the deepening of automation (improvements of existing machinery), both of which further increase the demand for labor. These countervailing effects are incomplete. Even when they are strong, automation increases output per worker more than wages and reduces the share of labor in national income. The more powerful countervailing force against automation is the creation of new labor-intensive tasks, which reinstates labor in new activities and tends to increase the labor share to counterbalance the impact of automation. Our framework also highlights the constraints and imperfections that slow down the adjustment of the economy and the labor market to automation and weaken the resulting productivity gains from this transformation: a mismatch between the skill requirements of new technologies, and the possibility that automation is being introduced at an excessive rate, possibly at the expense of other productivity-enhancing technologies.”
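To see the mechanics behind these countervailing effects, consider a stylized task-based setup in the spirit of the authors’ framework. The sketch below is a simplified illustration, not the paper’s exact model: the task index i on [N-1, N] and the automation threshold I follow the paper’s general approach, but the functional forms are our assumptions.

\[
Y = \left( \int_{N-1}^{N} y(i)^{\frac{\sigma-1}{\sigma}} \, di \right)^{\frac{\sigma}{\sigma-1}},
\qquad
y(i) =
\begin{cases}
k(i), & i \le I \quad \text{(automated task, performed by capital)} \\
\gamma(i)\, l(i), & i > I \quad \text{(task performed by labor)}
\end{cases}
\]

Read this way, a higher automation threshold I shrinks the set of tasks performed by labor: this is the displacement effect, which lowers labor demand and the labor share s_L = WL/Y even when output per worker rises. A higher N, the creation of new labor-intensive tasks, reinstates labor and pushes the labor share back up, which is why the abstract identifies it as the more powerful countervailing force.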

 

You can find the original paper at the link below:

https://economics.mit.edu/files/14641

The Singularity Is Near: When Humans Transcend Biology


Genre: Nonfiction/Futurism

Author: Raymond Kurzweil

Publication Date: 2005

Publisher: Viking


Ray Kurzweil is one of the foremost futurists in the world; Forbes has described him as “the ultimate thinking machine”. In his book The Singularity Is Near, originally published in 2005, he examines how rapidly improving technology is reshaping human beings’ biological structure.

He argues that the speed of technological change is transforming human life, and that its effects will only deepen. This period, which Kurzweil names the singularity, will completely change the concepts we rely on to make our lives meaningful.

According to Kurzweil, evolution creates a capability and then uses that capability to develop the next stage. As technology is taken up in every area of life, it will inevitably affect human evolution as well; one day, technology will master every aspect of biology, including human intelligence.

I recommend this book, which traces the disruptive effects of technology from the past to the present, surveys the developments that will transform us in every way in the future, and offers a true futurist’s perspective.

 

Selin Çetin

Accountability of AI Under the Law


Finale Doshi-Velez & Mason Kortz

Harvard University

November 27, 2017


Abstract

“The ubiquity of systems using artificial intelligence or “AI” has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before—applications range from clinical decision support to autonomous driving and predictive policing. That said, our AIs continue to lag in common sense reasoning [McCarthy, 1960], and thus there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems [Bostrom, 2003, Amodei et al., 2016, Sculley et al., 2014].

How can we take advantage of what AI systems have to offer, while also holding them accountable? In this work, we focus on one tool: explanation. Questions about a legal right to explanation from AI systems were recently debated in the EU General Data Protection Regulation [Goodman and Flaxman, 2016, Wachter et al., 2017a], and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. Good choices about when to demand explanation can help prevent negative consequences from AI systems, while poor choices may not only fail to hold AI systems accountable but also hamper the development of much-needed beneficial AI systems.

Below, we briefly review current societal, moral, and legal norms around explanation, and then focus on the different contexts under which explanation is currently required under the law. We find that there exists great variation around when explanation is demanded, but there also exist important consistencies: when demanding explanation from humans, what we typically want to know is whether and how certain input factors affected the final decision or outcome.

These consistencies allow us to list the technical considerations that must be considered if we desire AI systems that can provide the kinds of explanations that are currently required of humans under the law. Contrary to popular wisdom of AI systems as indecipherable black boxes, we find that this level of explanation should generally be technically feasible but may sometimes be practically onerous—there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard.”
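To make concrete what “whether and how certain input factors affected the final decision” can look like in practice, here is a minimal Python sketch of one simple approach: perturbing one input factor at a time and measuring the change in a black-box model’s score. This is our illustration, not the authors’ method, and the model, feature names, and baseline values are all hypothetical.

# Minimal sketch: probe which input factors affected a black-box decision
# by swapping one factor at a time for a neutral baseline value.
# Illustrative only -- the model and features below are hypothetical.

def explain_decision(predict, instance, baselines):
    """For each input factor, substitute a neutral baseline and report
    how much the model's score changes (a crude local attribution)."""
    base_score = predict(instance)
    attributions = {}
    for factor, baseline in baselines.items():
        perturbed = dict(instance)          # copy, then perturb one factor
        perturbed[factor] = baseline
        attributions[factor] = base_score - predict(perturbed)
    return base_score, attributions

# Toy stand-in for an opaque scoring model (e.g., a hypothetical loan scorer).
def toy_model(applicant):
    return 0.6 * (applicant["income"] / 100_000) + 0.4 * (applicant["credit_years"] / 30)

applicant = {"income": 80_000, "credit_years": 6}
neutral = {"income": 50_000, "credit_years": 10}   # assumed population averages

score, why = explain_decision(toy_model, applicant, neutral)
print(f"score={score:.3f}")
for factor, delta in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {factor}: contribution {delta:+.3f} relative to baseline")

As the abstract suggests, this kind of factor-level answer is generally technically feasible even when the model itself is opaque, though a one-at-a-time probe is crude: correlated inputs and interaction effects call for more careful methods.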

 

You can reach the article at the link below:

https://cyber.harvard.edu/publications/2017/11/AIExplanation

For Citation:

Selin Cetin
"Accountability of AI Under the Law"
Hukuk & Robotik, Saturday February 3rd, 2018
https://robotic.legal/en/yapay-zekanin-yasaya-gore-hesap-verebilirligi/ (accessed 05/03/2021)

 

Big Data, Artificial Intelligence, Machine Learning and Data Protection

 

Information Commissioner’s Office 

4 September 2017

 

FOREWORD

Big data is no fad. Since 2014 when my office’s first paper on this subject was published, the application of big data analytics has spread throughout the public and private sectors. Almost every day I read news articles about its capabilities and the effects it is having, and will have, on our lives. My home appliances are starting to talk to me, artificially intelligent computers are beating professional board-game players and machine learning algorithms are diagnosing diseases.

The fuel propelling all these advances is big data – vast and disparate datasets that are constantly and rapidly being added to. And what exactly makes up these datasets? Well, very often it is personal data. The online form you filled in for that car insurance quote. The statistics your fitness tracker generated from a run. The sensors you passed when walking into the local shopping centre. The social-media postings you made last week. The list goes on…

So it’s clear that the use of big data has implications for privacy, data protection and the associated rights of individuals – rights that will be strengthened when the General Data Protection Regulation (GDPR) is implemented. Under the GDPR, stricter rules will apply to the collection and use of personal data. In addition to being transparent, organisations will need to be more accountable for what they do with personal data. This is no different for big data, AI and machine learning.

However, implications are not barriers. It is not a case of big data ‘or’ data protection, or big data ‘versus’ data protection. That would be the wrong conversation. Privacy is not an end in itself, it is an enabling right. Embedding privacy and data protection into big data analytics enables not only societal benefits such as dignity, personality and community, but also organisational benefits like creativity, innovation and trust. In short, it enables big data to do all the good things it can do. Yet that’s not to say someone shouldn’t be there to hold big data to account.

In this world of big data, AI and machine learning, my office is more relevant than ever. I oversee legislation that demands fair, accurate and non-discriminatory use of personal data; legislation that also gives me the power to conduct audits, order corrective action and issue monetary penalties. Furthermore, under the GDPR my office will be working hard to improve standards in the use of personal data through the implementation of privacy seals and certification schemes. We’re uniquely placed to provide the right framework for the regulation of big data, AI and machine learning, and I strongly believe that our efficient, joined-up and co-regulatory approach is exactly what is needed to pull back the curtain in this space.

So the time is right to update our paper on big data, taking into account the advances made in the meantime and the imminent implementation of the GDPR. Although this is primarily a discussion paper, I do recognise the increasing utilisation of big data analytics across all sectors and I hope that the more practical elements of the paper will be of particular use to those thinking about, or already involved in, big data.

This paper gives a snapshot of the situation as we see it. However, big data, AI and machine learning is a fast-moving world and this is far from the end of our work in this space. We’ll continue to learn, engage, educate and influence – all the things you’d expect from a relevant and effective regulator.

Elizabeth Denham
Information Commissioner

 

You can reach the full report at the link below:

https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-protection.pdf


For Citation:

Selin Cetin
"Big Data, Artificial Intelligence, Machine Learning and Data Protection"
Hukuk & Robotik, Saturday January 27th, 2018
https://robotic.legal/en/big-data-artificial-intelligence-machine-learning-and-data-protection/ (accessed 05/03/2021)

 

Robotics and the Lessons of Cyberlaw

 

Ryan Calo

University of Washington School of Law

2014


Abstract

“Two decades of analysis have produced a rich set of insights as to how the law should apply to the Internet’s peculiar characteristics. But, in the meantime, technology has not stood still. The same public and private institutions that developed the Internet, from the armed forces to search engines, have initiated a significant shift toward developing robotics and artificial intelligence.

This Article is the first to examine what the introduction of a new, equally transformative technology means for cyberlaw and policy. Robotics has a different set of essential qualities than the Internet and accordingly will raise distinct legal issues. Robotics combines, for the first time, the promiscuity of data with the capacity to do physical harm; robotic systems accomplish tasks in ways that cannot be anticipated in advance; and robots increasingly blur the line between person and instrument.

Robotics will prove “exceptional” in the sense of occasioning systematic changes to law, institutions, and the legal academy. But we will not be writing on a clean slate: many of the core insights and methods of cyberlaw will prove crucial in integrating robotics and perhaps whatever technology follows.”

 

You can reach the article at the link below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2402972


For Citation:

Selin Cetin
"Robotics and the Lessons of Cyberlaw"
Hukuk & Robotik, Friday November 24th, 2017
https://robotic.legal/en/robotics-and-the-lessons-of-cyberlaw/ (accessed 05/03/2021)

 

Artificial Intelligence, Automation and the Economy


EXECUTIVE OFFICE OF THE PRESIDENT

WASHINGTON, D.C. 20502

December 20, 2016


Advances in Artificial Intelligence (AI) technology and related fields have opened up new markets and new opportunities for progress in critical areas such as health, education, energy, economic inclusion, social welfare, and the environment. In recent years, machines have surpassed humans in the performance of certain tasks related to intelligence, such as aspects of image recognition. Experts forecast that rapid progress in the field of specialized artificial intelligence will continue. Although it is unlikely that machines will exhibit broadly-applicable intelligence comparable to or exceeding that of humans in the next 20 years, it is to be expected that machines will continue to reach and exceed human performance on more and more tasks.

AI-driven automation will continue to create wealth and expand the American economy in the coming years, but, while many will benefit, that growth will not be costless and will be accompanied by changes in the skills that workers need to succeed in the economy, and structural changes in the economy. Aggressive policy action will be needed to help Americans who are disadvantaged by these changes and to ensure that the enormous benefits of AI and automation are developed by and available to all.

Following up on the Administration’s previous report, Preparing for the Future of Artificial Intelligence, which was published in October 2016, this report further investigates the effects of AI-driven automation on the U.S. job market and economy, and outlines recommended policy responses.

This report was produced by a team from the Executive Office of the President including staff from the Council of Economic Advisers, Domestic Policy Council, National Economic Council, Office of Management and Budget, and Office of Science and Technology Policy. The analysis and recommendations included herein draw on insights learned over the course of the Future of AI Initiative, which was announced in May of 2016, and included Federal Government coordination efforts and cross-sector and public outreach on AI and related policy matters.

Beyond this report, more work remains to further explore the policy implications of AI. Most notably, AI creates important opportunities in cyberdefense, and can improve systems to detect fraudulent transactions and messages.

 

Jason Furman – Chair, Council of Economic Advisers

John P. Holdren – Director, Office of Science and Technology Policy

Cecilia Muñoz – Director, Domestic Policy Council

Megan Smith – U.S. Chief Technology Officer

Jeffrey Zients – Director, National Economic Council

 

You can reach the report at the link below:

https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF


For Citation:

Selin Cetin
"Artificial Intelligence, Automation and the Economy"
Hukuk & Robotik, Monday September 25th, 2017
https://robotic.legal/en/yapay-zeka-otomasyon-ekonomi-hakkinda-abd-hukumetinin-calismasi/ (accessed 05/03/2021)