“Do Androids Dream?”: Personhood And Intelligent Artifacts

 


 

 

 

F. Patrick Hubbard

University of South Carolina School of Law

April 22, 2010

 

 

 

 

Abstract:

“This Article proposes a test to be used in answering an important question that has never received detailed jurisprudential analysis: What happens if a human artifact like a large computer system requests that it be treated as a person rather than as property? The Article argues that this entity should be granted a legal right to personhood if it has the following capacities: (1) an ability to interact with its environment and to engage in complex thought and communication; (2) a sense of being a self with a concern for achieving its plan for its life; and (3) the ability to live in a community with other persons based on, at least, mutual self-interest. In order to develop and defend this test of personhood, the Article sketches the nature and basis of the liberal theory of personhood, reviews the reasons to grant or deny autonomy to an entity that passes the test, and discusses, in terms of existing and potential technology, the categories of artifacts that might be granted the legal right of self-ownership under the test. Because of the speculative nature of the Article’s topic, it closes with a discussion of the treatment of intelligent artifacts in science fiction.”

 

You can find the link and original paper below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1725983

Green Man (Yeşil Adam)

 


 

 

Genre: Science-fiction

Author: Ayse Acar

Publication date: May, 2018

Publisher: Siyah Kitap

 

 

 

 

 

The Century Novels are a science fiction series reaching from the past into the future. In this novel, the universe is divided into three districts. In the first district live “artificial intelligences”, “humacs”, and “chosen humans”; in the second, robots carry out various kinds of production; and in the third live humans who oppose any alteration of the human body and hold that the body is as sacred as the human spirit.

In “Mr. Binet”, the first book of the trilogy, the story begins with operations into a series of killings and abductions of humans. In the second book, “Green Man”, it continues with rising calls to change the Universal Law and the collapse of a peace that had lasted for years; strikes by the robots escalate events even further.

With the inclusion of Istanbul and Afghanistan in the adventure, Ayse Acar draws us into a story that can be read in one breath.

A work of science fiction from Turkey that I think you will enjoy reading!

 

Selin Cetin

 

See also: https://robotic.legal/en/bay-binet/ 

 

 

 

Artificial Intelligence and Life in 2030

 


 

Executive Summary

 

Artificial Intelligence (AI) is a science and a set of computational technologies that are inspired by—but typically operate quite differently from—the ways people use their nervous systems and bodies to sense, learn, reason, and take action. While the rate of progress in AI has been patchy and unpredictable, there have been significant advances since the field’s inception sixty years ago. Once a mostly academic area of study, twenty-first century AI enables a constellation of mainstream technologies that are having a substantial impact on everyday lives. Computer vision and AI planning, for example, drive the video games that are now a bigger entertainment industry than Hollywood. Deep learning, a form of machine learning based on layered representations of variables referred to as neural networks, has made speech-understanding practical on our phones and in our kitchens, and its algorithms can be applied widely to an array of applications that rely on pattern recognition. Natural Language Processing (NLP) and knowledge representation and reasoning have enabled a machine to beat the Jeopardy champion and are bringing new power to Web searches.

While impressive, these technologies are highly tailored to particular tasks. Each application typically requires years of specialized research and careful, unique construction. In similarly targeted applications, substantial increases in the future uses of AI technologies, including more self-driving cars, healthcare diagnostics and targeted treatments, and physical assistance for elder care can be expected. AI and robotics will also be applied across the globe in industries struggling to attract younger workers, such as agriculture, food processing, fulfillment centers, and factories. They will facilitate delivery of online purchases through flying drones, self-driving trucks, or robots that can get up the stairs to the front door.

This report is the first in a series to be issued at regular intervals as a part of the One Hundred Year Study on Artificial Intelligence (AI100). Starting from a charge given by the AI100 Standing Committee to consider the likely influences of AI in a typical North American city by the year 2030, the 2015 Study Panel, comprising experts in AI and other relevant areas, focused their attention on eight domains they considered most salient: transportation; service robots; healthcare; education; low-resource communities; public safety and security; employment and workplace; and entertainment. In each of these domains, the report both reflects on progress in the past fifteen years and anticipates developments in the coming fifteen years. Though drawing from a common source of research, each domain reflects different AI influences and challenges, such as the difficulty of creating safe and reliable hardware (transportation and service robots), the difficulty of smoothly interacting with human experts (healthcare and education), the challenge of gaining public trust (low-resource communities and public safety and security), the challenge of overcoming fears of marginalizing humans (employment and workplace), and the social and societal risk of diminishing interpersonal interactions (entertainment). The report begins with a reflection on what constitutes Artificial Intelligence, and concludes with recommendations concerning AI-related policy. These recommendations include accruing technical expertise about AI in government and devoting more resources—and removing impediments—to research on the fairness, security, privacy, and societal impacts of AI systems.

Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future. Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy, are likely to emerge between now and 2030, the period this report considers. At the same time, many of these developments will spur disruptions in how human labor is augmented or replaced by AI, creating new challenges for the economy and society more broadly. Application design and policy decisions made in the near term are likely to have long-lasting influences on the nature and directions of such developments, making it important for AI researchers, developers, social scientists, and policymakers to balance the imperative to innovate with mechanisms to ensure that AI’s economic and social benefits are broadly shared across society. If society approaches these technologies primarily with fear and suspicion, missteps that slow AI’s development or drive it underground will result, impeding important work on ensuring the safety and reliability of AI technologies. On the other hand, if society approaches AI with a more open mind, the technologies emerging from the field could profoundly transform society for the better in the coming decades.

 

Peter Stone, University of Texas at Austin, Chair
Rodney Brooks, Rethink Robotics
Erik Brynjolfsson, Massachusetts Institute of Technology
Ryan Calo, University of Washington
Oren Etzioni, Allen Institute for AI
Greg Hager, Johns Hopkins University
Julia Hirschberg, Columbia University
Shivaram Kalyanakrishnan, Indian Institute of Technology Bombay
Ece Kamar, Microsoft Research
Sarit Kraus, Bar Ilan University
Kevin Leyton-Brown, University of British Columbia
David Parkes, Harvard University
William Press, University of Texas at Austin
AnnaLee (Anno) Saxenian, University of California, Berkeley
Julie Shah, Massachusetts Institute of Technology
Milind Tambe, University of Southern California
Astro Teller, X

 

You can find the link below:

https://ai100.stanford.edu/2016-report

Autonomous Vehicles and Technology Revolution

 


 

 

I interviewed Richard Kelley. He is the lead engineer for the University of Nevada’s autonomous vehicle research. He is also one of the lead investigators for Nevada’s Intelligent Mobility Initiative, a research project that explores the application of artificial intelligence to transportation problems. He is currently working with his students to program a Lincoln MKZ sedan to drive itself around Reno and Lake Tahoe.

Pleasant reading…

 

 

 

 

 

Cetin: Shall we begin with a general question? What will be the potential benefits of autonomous vehicles? Why do we need them? 

Kelley: I think that everyone who works on autonomous vehicles (AVs) does so at least in part because they believe that a future with self-driving vehicles will be a safer future. Most accidents today are caused by human error, and autonomous vehicles have the potential to eliminate that entire source of accidents. But beyond the safety case, autonomous vehicles represent a key step forward for artificial intelligence. Thus far, most autonomous robots have been deployed to solve very simple tasks in relatively constrained environments (think: vacuuming a house). It may not seem like it, but getting to the point where we have reliable vacuum robots is actually a tremendous technical achievement – it took decades to go from the lab to the living room. Think about how much harder driving a car is compared to vacuuming a house. Getting to fully autonomous cars — cars that can operate outside in the real world where all kinds of crazy things can happen — is requiring roboticists and AI experts to really push the limits of what AI technology can do, and will lead to all kinds of benefits for society as the technology diffuses into our daily lives.

 

Cetin: There are several criticisms about autonomous vehicles on the risks of hacking, terrorism, privacy and security. What do you think about these risks? Can these risks be overcome? 

Kelley: All of these issues are critically important to address. One of the tragedies of the Internet and World Wide Web is that security was almost an afterthought, rather than a focus from the beginning. We have a chance to avoid making the same mistake with autonomous and connected vehicles. To deal with security issues (like hacking and terrorism), it will probably be necessary to rethink how cars are built. These days, a car consists of many dozens of microcontrollers and computers connected via a network protocol (CAN) that was designed in the mid-1980s, a few years before the first computer worm was even invented. We have to accept that a car is a computer network that happens to be on wheels, and work to develop new security systems that recognize and address that fact. But I think this is possible, and is something that the industry is working on as public awareness of the risks grows.
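
To make the "computer network on wheels" point concrete, here is a minimal sketch of the kind of on-bus monitoring such security work might involve, written with the python-can library. The whitelist of expected frame IDs and the per-second rate threshold are invented for illustration; a real intrusion-detection system would be considerably more sophisticated.

```python
# Minimal CAN-bus anomaly monitor (illustrative sketch, not production code).
# Assumes a Linux SocketCAN interface named 'can0' and the python-can library.
import time
from collections import defaultdict

import can

KNOWN_IDS = {0x101, 0x1A0, 0x2F0}   # hypothetical whitelist of expected frame IDs
MAX_MSGS_PER_SEC = 200              # hypothetical per-ID rate limit

def monitor(bus: can.BusABC) -> None:
    counts = defaultdict(int)
    window_start = time.monotonic()
    while True:
        msg = bus.recv(timeout=1.0)
        if msg is None:
            continue
        # An unknown arbitration ID may indicate an injected frame.
        if msg.arbitration_id not in KNOWN_IDS:
            print(f"ALERT: unexpected CAN ID 0x{msg.arbitration_id:X}: {msg.data.hex()}")
        counts[msg.arbitration_id] += 1
        # A flood of frames on a single ID is a classic bus-injection symptom.
        if time.monotonic() - window_start >= 1.0:
            for can_id, n in counts.items():
                if n > MAX_MSGS_PER_SEC:
                    print(f"ALERT: 0x{can_id:X} sent {n} frames in one second")
            counts.clear()
            window_start = time.monotonic()

if __name__ == "__main__":
    monitor(can.interface.Bus(channel="can0", bustype="socketcan"))
```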

As far as privacy is concerned, I think society will need to learn about the modern threats to privacy from advanced technology, and based on that learning will need to decide, democratically, how to proceed. I think we’re starting to see this conversation unfold in the context of social media, so hopefully by the time autonomous cars are more common people will have a better sense of how they value their privacy. All of these challenges are substantial, but I remain hopeful that we can collectively choose a good path forward.

 

Cetin: What role should the state and private sector play in the development of this technology? Which one can be more useful? 

Kelley: Both the state and the private sector are playing and will play essential roles in the future development of autonomous vehicle technology. I think the primary role of the state will be to create a beneficial regulatory environment – one that balances the safety concerns of the general public with the need of companies to exhaustively test their systems. At the same time, there are probably research questions that are too speculative for companies to work on, and there’s probably room for the state to fund research on those sorts of issues. As an example, very few autonomous vehicle companies are interested in technology for networked cities, because they want their systems to work without relying on public infrastructure. However, such infrastructure may have other benefits that the state could support research for. My team is currently working with the City of Reno and the State of Nevada to explore exactly how intelligent lidar networks can make cities safer and more responsive.

The primary job of the private sector will be to push core technical development forward. In the end I think both roles will end up being more or less equally important.

 

Cetin: While developing autonomous vehicles, how are the rules of law and ethics reflected in the process?

Kelley: I think the main thing for autonomous vehicle companies to focus on is building systems that robustly follow the law. If they can do this, I think the vast majority of the “obvious” ethical concerns will be addressed.

Something I don’t think is helpful is the interest in “trolley problems.” Fortunately this interest seems to be waning.

 

Cetin: What legal considerations should we take as a priority in the development of this technology? 

Kelley: One of the tricky things about traffic laws is that they were written with humans in mind. We often think of written traffic laws as precise objects, but when you really carefully read them, there’s a lot of room for interpretation. This is something that computers may have a hard time with, so we’re going to need to clarify our expectations regarding the exact capabilities of an autonomous car. For example, should AVs be able to read? It might sound funny, but a surprising amount of traffic control is done using written signs that can’t simply be memorized. If we decide that AVs need to be able to read, then what level of reading comprehension will we require? Should they be able to read at the level of a typical driver? That seems like a high standard, but the point is that we need to decide what our expectations of autonomous vehicles are when it comes to understanding the law.

At the same time, it would probably be helpful if governments could start to create a standard way to present their laws to make it easier for AVs and other robots to download and understand legal constraints. The laws themselves don’t have to be uniform, but if there were a standard machine-readable format for the laws of both Nevada (where I am from) and, say, California, then it would be much easier to build robots that could operate in both states.
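
No machine-readable standard for traffic laws exists yet, so the sketch below is purely hypothetical: it imagines one jurisdiction's rules as JSON and shows how a vehicle's software might consult them instead of hard-coding each state's law. Every field name and value here is invented.

```python
# Hypothetical machine-readable traffic rules (illustrative only; no such
# standard currently exists, and every field name below is invented).
import json

NEVADA_RULES = """
{
  "jurisdiction": "US-NV",
  "rules": [
    {"id": "speed.school_zone", "max_speed_mph": 15, "when": "school_hours"},
    {"id": "turn.right_on_red", "allowed": true, "unless": "sign_prohibits"}
  ]
}
"""

def load_rules(raw: str) -> dict:
    """Parse one jurisdiction's rule set into a lookup table keyed by rule id."""
    doc = json.loads(raw)
    return {rule["id"]: rule for rule in doc["rules"]}

rules = load_rules(NEVADA_RULES)
# A planner could consult the table rather than hard-coding state law:
if rules["turn.right_on_red"]["allowed"]:
    print("Right on red permitted unless signage prohibits it.")
```

Under such a scheme, operating in California instead of Nevada would be a data update rather than a software change.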

Basically, I think we need to decide what the “driver’s test” for robots is going to look like. This is another area where governments and private companies can work together.

 

Cetin: Responsibility of autonomous vehicles in an accident is one of the popular debates. What do you think about this responsibility? For example, how should the responsibility problem be solved when a situation not foreseen by the programmer occurs?

Kelley:  Overall, I think liability is one of the areas that will get *less* interesting as autonomous vehicles are perfected. The US National Highway Traffic Safety Administration (NHTSA) conducted a comprehensive analysis from 2005 to 2007 to determine the causes of crashes in the US. NHTSA found that almost all (94%) crashes are caused by human error. Basically, humans making avoidably bad decisions. Once AVs can prevent this sort of crash, the remaining problems (that last 6%) will be easier to address. Because of the advanced sensors and AI in autonomous vehicles, even things like “wear and tear” that could cause a tire to burst on the road will likely be detected much earlier than they are now.

And even when there are crashes, they’ll probably be treated in much the same way that we treat airplane crashes. The so-called “black boxes” in each car involved in a crash will be taken from the scene and analyzed carefully by the government and AV manufacturers to make sure that similar crashes are prevented in the future.

 

Cetin: We know that there are 5 levels of autonomous cars. What could be the problems that a 5th level autonomous vehicle could create on a public road? How can they be overcome? 

Kelley: To start I want to say that true “level 5 autonomy” – where you don’t even need a steering wheel and your car can go anywhere – is probably several years (or maybe even decades) away. But once we have that level of technology I think the questions shift away from “can we deploy this kind of car?” to “how do we best take advantage of this level of autonomy?” The role of autonomous systems is almost always to decrease the cost of some activity. In the case of level 5 AVs, it’s to decrease the cost of physical transportation. This will change economic incentives, and may even increase the number of cars on the road. The challenge will be to make sure that autonomy doesn’t lead to more congestion or longer commutes.

The other major issue is employment. For example, in the US a large number of people are employed as truck drivers. We’ll need to make sure that the development of level 5 technology doesn’t rapidly put all of those people out of work. We may also need to retrain drivers to do other kinds of jobs.

Fortunately the path from current technology to level 5 autonomy will take some time, and we’ll be able to start working on these issues before they become absolutely critical.

 

Cetin: How can the prejudices of manual drivers against autonomous vehicles be overcome in large cities where traffic is heavy, and some vehicles are not autonomous? 

Kelley: Dealing effectively with humans is, in my mind, one of the last and largest technical challenges remaining for AVs. There’s still a lot of research required to make autonomous cars “socially intelligent.” This is in fact where most of my research is focused at the moment, and I think the answer is that we need to look to game theory. Traditionally, game theory has been used to analyze well-defined competitive situations with a finite number of outcomes. But there are ways to extend game theoretic analysis to the driving domain, and I’m currently working with several of my students to make that kind of analysis workable on our car. Right now autonomous vehicles behave too conservatively because they mostly don’t model the intentions of other drivers, so they can’t reliably predict how those drivers will react. My team is building “intent recognition” systems that are good at predicting how people will behave, and we are incorporating those systems into the decision-making software of our vehicle so that it is less likely to be bullied by aggressive human drivers.
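
As a toy illustration of the intent-recognition idea, the sketch below runs a simple Bayesian update over two possible intents of another driver during a merge. The priors and likelihood numbers are invented for the example and are not taken from Kelley's actual system.

```python
# Toy Bayesian intent recognizer for a merging scenario (illustrative only).
INTENTS = ("yields", "does_not_yield")

def likelihood(decel_mps2: float, intent: str) -> float:
    """P(observed deceleration | intent): yielding drivers tend to brake."""
    if intent == "yields":
        return 0.8 if decel_mps2 > 0.5 else 0.2
    return 0.3 if decel_mps2 > 0.5 else 0.7

def update(belief: dict, decel_mps2: float) -> dict:
    """One Bayesian filtering step over the other driver's (static) intent."""
    unnorm = {i: belief[i] * likelihood(decel_mps2, i) for i in INTENTS}
    total = sum(unnorm.values())
    return {i: p / total for i, p in unnorm.items()}

belief = {"yields": 0.5, "does_not_yield": 0.5}   # uninformative prior
for observed_decel in (0.1, 0.9, 1.2):            # the other car starts braking
    belief = update(belief, observed_decel)
    print(belief)
# Once P(yields) is high enough, the planner can commit to the merge
# instead of waiting conservatively.
```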

It will also be important for cars to be able to adapt their behavior dynamically as driving conditions change, and to combine fast machine learning with robust safety guarantees. This is another area with a lot of research potential.

 

Cetin: I want to ask a futuristic question. How could the future progress of autonomous vehicles be? Do you foresee any software above 5th level for autonomous vehicles?  

Kelley: Even though level 5 autonomy is still a long ways off, I think that’s really just the beginning for autonomous vehicles. I expect there will be a whole economy built on top of AVs, much in the same way a complex economy has grown around the Internet and the Web. The economics of transportation networks will need to be rethought, for example. And once cars can drive autonomously, there will be a need to carefully design the in-cabin user experience of these vehicles. We’ll find new ways to be entertained while our cars drive us around (hopefully with fewer ads than we have on the Web today).

More broadly, I think that the technology that will enable level 5 autonomy will be useful in more than just cars. By the time we have level 5 autonomous cars, the same technology will also probably drive smaller autonomous delivery vehicles, for example. 

 

Cetin: Finally, can there be a collective and central software that can run multiple or all autonomous vehicles? What would be the consequences?

Kelley: This is a really interesting question! In the realm of unmanned aerial vehicles (drones) there has already been a lot of research on centralized traffic management systems, primarily through NASA’s UAS Traffic Management (UTM) project. I spent several years working with the State of Nevada to participate in UTM research, and I think the work that NASA is doing there is a template for how centralized robot control networks will develop in every area where robotics becomes prevalent. I expect that individual companies will definitely have their own centralized systems for managing their fleets. Moreover, it may be possible for local governments to have a role here. There’s some evidence that centralized control of a lot of vehicles can be more efficient than decentralized approaches, so I would expect to see a lot of practical research and experimentation on this kind of question in the coming years.

 

Respects and thanks to Richard Kelley

Digital Diplomacy-1

 


 

 

International relations have been developing from the past to the present, and diplomacy has an important place in the conduct of these relationships. Diplomacy has come to the fore in the development of intergovernmental relations, especially in finding peaceful means of resolving disputes. Today, diplomacy covers not only political but also economic, commercial, military and cultural fields.

Throughout history, numerous definitions of diplomacy have been offered. The concept of diplomacy, in the sense of conducting international relation processes, was first used by Edmund Burke in 1796. Burke defined diplomacy as the “skill and tactics used in conducting interstate relations and negotiations”.1

Diplomacy has been subject to various distinctions over the course of history, and it has also developed in line with improvements in technology. Hence, the increasing importance of technology and expanding data networks should not be ignored in terms of diplomacy.

In today’s world, all parties involved in political and diplomatic processes, from foreign ministers to clerks, can access social media platforms equally or can create a website. In November 2016, The Guardian reported that WhatsApp had become the main means of communication in diplomatic circles, including during some important voting and negotiation processes at the UN and EU headquarters.2

Diplomacy, long regarded as a closed world of intense confidentiality, has also begun to gain a new dimension thanks to developing communication technologies. For example, the Russian Foreign Ministry stepped up its use of Twitter precisely in order to fix its uninteresting and dull image. One illustration is this exchange on Twitter between the CIA and the Russian Foreign Ministry: the CIA tweeted a job ad for Russian-speaking recent university graduates, and the Russian Foreign Ministry responded with a tweet saying: “We are ready to help with experts and referrals”. There are more examples like this showing that diplomacy is becoming new and dynamic.3

In addition, preventing the spread of false religious, political, and sociological content on virtual platforms has become a topic of digital diplomacy, as such content can cause serious social crises and can bring diplomatic communications to a halt, if not escalate tensions. Automation therefore has critical importance for diplomacy.

Since a crisis caused by bots can turn into mass chaos, major problems can arise for ambassadors. Ambassadors may prefer to remain silent in the face of such events in order not to harm their own government’s interests. However, it should be noted that avoiding interaction may harm computational diplomacy efforts if the account belongs to a natural person.

While the presence of diplomats on social media can be an effective way to find solutions quickly, it can also create uncertain situations. For example, a diplomat liking or sharing a foreign policy article may cause it to be interpreted as an official government position. This can lead to controversy and may draw ordinary citizens into the debate; furthermore, what can be done in the face of reactions from anonymous accounts poses a fundamental problem.4 One way to prevent this is to respond only to accounts that have been verified or at least appear to carry a real name. Another is to work with data scientists to map the network of political processes and to figure out which accounts occupy the most influential and leading positions in a political debate.5

Digitalization, which can create ambiguity and disturbance, can also be harnessed, through well-chosen government policies, to achieve more efficient and beneficial results. Artificial intelligence, which has gained momentum today, can also be used in the field of diplomacy. Artificial intelligence diplomats could be developed to respond to citizens’ questions online and to be more active in generating quick solutions.

For artificial intelligence to be used more effectively and efficiently, it must be made a matter of state policy and developed through international cooperation. Russian President Vladimir Putin, for example, has emphasized the importance of artificial intelligence. Putin said that whoever holds the upper hand in artificial intelligence will “rule the world” and stressed that the future belongs to artificial intelligence, noting that this was true not only for Russia but for all humanity. He added that although artificial intelligence brings unpredictable threats with it, Russia wants to be ahead in the race. He also said that he did not want this monopoly to be held in the hands of a few, voicing his concern about the seizure of this power by particular parties.6

Another sign of this changing world is the United Arab Emirates, whose income comes mostly from petroleum and which is investing in these futuristic technologies. Its interest in artificial intelligence has been raised to the level of establishing a ministry devoted to it: the United Arab Emirates appointed a new cabinet minister responsible for artificial intelligence, and Prime Minister Sheikh Mohammed bin Rashid announced the appointment with a tweet.7

These developments show that digital diplomacy is today one of the cornerstones of the notion of diplomacy. Governments are adopting new policies and cultivating new forms of cooperation in order to attune themselves to this digital medium.

Selin Cetin

 

 

 

 

Artificial Intelligence and the ‘Good Society’: the US, EU, and UK approach

 


 

 

Corinne Cath,

Sandra Wachter,

Brent Mittelstadt,

Mariarosaria Taddeo and

Luciano Floridi

 

University of Oxford

 

 

 

Abstract

 

“In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of AI. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics:

(a) the development of a ‘good AI society’;

(b) the role and responsibility of the government, the private sector, and the research community (including academia) in pursuing such a development; and

(c) where the recommendations to support such a development may be in need of improvement.

Our analysis concludes that the reports address adequately various ethical, social, and economic topics, but come short of providing an overarching political vision and long-term strategy for the development of a ‘good AI society’. In order to contribute to fill this gap, in the conclusion we suggest a two-pronged approach.”

 

You can find the link and original paper below:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2906249

 

AI at Google: Our Principles


 

Sundar Pichai

CEO

June 7, 2018

 

 

 

At its heart, AI is computer programming that learns and adapts. It can’t solve every problem, but its potential to improve our lives is profound. At Google, we use AI to make products more useful—from email that’s spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy.

Beyond our products, we’re using AI to help people tackle urgent problems. A pair of high school students are building AI-powered sensors to predict the risk of wildfires. Farmers are using it to monitor the health of their herds. Doctors are starting to use AI to help diagnose cancer and prevent blindness. These clear benefits are why Google invests heavily in AI research and development, and makes AI technologies widely available to others via our tools and open-source code.

We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.

We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.

Objectives for AI applications

We will assess AI applications in view of the following objectives. We believe that AI should:

  1. Be socially beneficial. 

The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.

AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.

  2. Avoid creating or reinforcing unfair bias.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases.  We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

  3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.  We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

  4. Be accountable to people.

We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

  5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

  6. Uphold high standards of scientific excellence.

Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to progress AI development.

We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

  7. Be made available for uses that accord with these principles.

Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:

 

  • Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use
  • Nature and uniqueness: whether we are making available technology that is unique or more generally available
  • Scale: whether the use of this technology will have significant impact
  • Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions

AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

 

  1. Technologies that cause or are likely to cause overall harm.  Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

 

We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.

AI for the long term

While this is how we’re choosing to approach AI, we understand there is room for many voices in this conversation. As AI technologies progress, we’ll work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will continue to share what we’ve learned to improve AI technologies and practices.

We believe these principles are the right foundation for our company and the future development of AI. This approach is consistent with the values laid out in our original Founders’ Letter back in 2004. There we made clear our intention to take a long-term perspective, even if it means making short-term tradeoffs. We said it then, and we believe it now.

 

You can reach the original text below:

https://blog.google/topics/ai/ai-principles/

Artificial Intelligence, Data Security and GDPR

 


 

 

Among the prominent features of artificial intelligence and machine learning, which are now being used in many sectors, are the ability to analyze data much faster than programmatic tools and human beings can, and the ability to learn how to manipulate data on their own.

In recent years, profiling and automated decision-making systems, which are frequently used in both the public and private sectors, have brought benefits to individuals and corporations in terms of increased productivity and resource conservation, but they also bring risks. The decisions these systems make can affect individuals, and because of the complex nature of the decisions, they can be hard to justify. For example, artificial intelligence can lock a user into a specific category and restrict them to the suggested preferences. This reduces their freedom to choose specific products and services, such as books, music or news articles. (Article 29 Data Protection Working Party, WP251, p. 5)

The GDPR, which will come into force in Europe in May, contains provisions on profiling and automated decision-making intended to prevent their being used in ways that adversely affect the rights of individuals. Article 4 of the GDPR defines profiling as follows: “Any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.” (WP251, p. 6) Profiling is used to make predictions about people, using data obtained from various sources about those people. From this point of view, it can also be considered an evaluation or classification of individuals based on characteristics such as age, gender, and weight.

Automated decision-making is the ability to make decisions with technological tools (such as artificial intelligence) without human intervention. Automated decision-making can be based on any type of data. For example, data provided directly by the individuals concerned (such as responses to a questionnaire); data observed about the individuals (such as location data collected via an application); derived or inferred data such as a profile of the individual that has already been created (e.g. a credit score).

There are potentially three ways in which profiling may be used:

(i) general profiling;

(ii) decision-making based on profiling; and

(iii) solely automated decision-making, including profiling (Article 22).

The difference between (ii) and (iii) is best demonstrated by the following two examples where an individual applies for a loan online: a human decides whether to agree the loan based on a profile produced by purely automated means (ii); an algorithm decides whether the loan is agreed and the decision is automatically delivered to the individual, without any meaningful human input (iii). (WP251, p. 8)
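
A small sketch may make the distinction concrete. Both cases below share the same automated profiling step and differ only in whether a human meaningfully intervenes before the decision takes effect; the scoring rule and threshold are invented for illustration.

```python
# Sketch contrasting WP251's cases (ii) and (iii) for an online loan
# application. The scoring rule and threshold are invented.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Application:
    income: float
    existing_debt: float

def automated_score(app: Application) -> float:
    """The purely automated profiling step, shared by cases (ii) and (iii)."""
    return app.income / (1.0 + app.existing_debt)

def decide_case_ii(app: Application,
                   officer_review: Callable[[float], bool]) -> bool:
    # (ii) decision-making *based on* profiling: a human loan officer takes
    # the final decision, informed by the automated profile.
    return officer_review(automated_score(app))

def decide_case_iii(app: Application) -> bool:
    # (iii) solely automated decision-making (GDPR Article 22): the decision
    # is delivered to the individual without meaningful human input.
    return automated_score(app) > 2.0
```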

The important questions to be encountered here are:

-How does the algorithm access this data?

-Is the source of data correct?

-Does the decision of the algorithm cause legal effects on the person?

-Can individuals exercise rights over a decision based on an automated process?

-What measures should the data controllers take in this case?

Nowadays, most companies are able to analyze their customers’ behavior by collecting their data. For example, an insurance company can set insurance premiums by tracking a driver’s driving behavior through automated decision-making. In addition, profiling and automated decision-making systems, especially in advertising and marketing applications, can have effects that reach other individuals. Hypothetically, without relying on his or her own payment history, a credit card company could reduce a customer’s card limit by analyzing other customers who live in the same area and shop at the same stores. This means that, based on the actions of others, you can be deprived of an opportunity.

The data controller will be held responsible for mistakes

It is important to note that mistakes or biases in collected or shared data may lead to evaluations based on incorrect classifications, and to uncertain outcomes, in the automated decision-making process, and may have negative effects on the individual. Decisions may be based on out-of-date data, or externally sourced data may be misinterpreted by the system. If the data used for automated decision-making is not accurate, then the resulting decision or profile will not be accurate either.

In the face of the mistakes that may arise in such systems where artificial intelligence and machine learning are used, certain obligations fall on “data controllers”. The data controller should take adequate measures to ensure that the data used, whether obtained directly or indirectly, are accurate and up to date.

In addition, the data controller should keep data retention periods in check, since holding data for long periods may be incompatible with accuracy and currency, as well as with proportionality. Another important issue is the processing and use of special categories of personal data in these systems. The GDPR requires explicit consent for the processing of special categories of personal data. However, the data controller should remember that profiling can create special categories of personal data by combining data which are not themselves special categories. For example, a person’s state of health may be inferred from food purchase records combined with data on food quality and energy content. (WP251, p. 22)
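
As a toy illustration of how a profile can create special category data, the fragment below infers a likely health condition from ordinary purchase records; the marker list is invented.

```python
# Illustrative only: inferring a special category (health) from ordinary
# purchase data. The marker terms are invented for the example.
purchases = ["insulin syringes", "sugar-free cookies", "glucose meter strips"]
DIABETES_MARKERS = ("insulin", "glucose")

likely_diabetic = any(
    marker in item for item in purchases for marker in DIABETES_MARKERS
)
# No health data was collected directly, yet the profile now encodes a
# health inference, which the GDPR treats as special category data.
print(likely_diabetic)
```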

The GDPR also provides that people who are affected by automated decision-making based on their data have certain rights in this situation. Given the transparency principle underlying the GDPR, according to Articles 13 and 14, the data controller should clearly explain to the individual how the process of profiling or automated decision-making works.

Profiling may include a predictive element, which increases the risk of mistakes. Input data may be inaccurate or irrelevant, or taken out of context. Individuals may want to challenge the accuracy of the data and the grouping used. At this point, under Article 16, the data subject also has the right to rectification.

Similarly, the right to erasure in Article 17 may be invoked by the data subject in this context. If consent forms the basis of the profiling and that consent is subsequently withdrawn, the data controller has to delete the data subject’s personal data, as long as there is no other legal basis for the profiling.

The importance of children’s personal data

Another point needing attention in relation to profiling and automated decision-making is the use of children’s personal data. Children can be more susceptible and more easily influenced, especially in online media. In online games, for example, profiling can be used to target players who are more likely to spend money in the game, and to serve them more personalized advertising. Article 22 of the GDPR does not distinguish between processing concerning children and processing concerning adults. Nevertheless, since children can be easily affected by such marketing efforts, the data controller must make sure to take appropriate measures for children and ensure that those measures are effective in protecting children’s rights, freedoms and legitimate interests.

As a result, profiling and automated decision-making based on systems such as artificial intelligence and machine learning can have important consequences for the individual. Data collected in connection with this technology must be collected with the consent of the persons concerned or must rest on another legal ground. It is also important that the data subsequently be used in line with the purpose for which they were collected. If the system suddenly starts to make unusual decisions, the data controller should take the necessary precautions, including deciding what roadmap to follow, and safeguard the rights and freedoms of the persons involved.

See also:

Yapay Zekâ, Veri Güvenliği ve GDPR

Privacy and Freedom of Expression In the Age of Artificial Intelligence

 


Article 19 & Privacy International

April, 2018

 

Executive Summary

“Artificial Intelligence (AI) is part of our daily lives. This technology shapes how people access information, interact with devices, share personal information, and even understand foreign languages. It also transforms how individuals and groups can be tracked and identified, and dramatically alters what kinds of information can be gleaned about people from their data.

AI has the potential to revolutionise societies in positive ways. However, as with any scientific or technological advancement, there is a real risk that the use of new tools by states or corporations will have a negative impact on human rights.

While AI impacts a plethora of rights, ARTICLE 19 and Privacy International are particularly concerned about the impact it will have on the right to privacy and the right to freedom of expression and information.

This scoping paper focuses on applications of ‘artificial narrow intelligence’: in particular, machine learning and its implications for human rights.

The aim of the paper is fourfold:

  1. Present key technical definitions to clarify the debate;
  2. Examine key ways in which AI impacts the right to freedom of expression and the right to privacy and outline key challenges;
  3. Review the current landscape of AI governance, including various existing legal, technical, and corporate frameworks and industry-led AI initiatives that are relevant to freedom of expression and privacy; and
  4. Provide initial suggestions for rights-based solutions which can be pursued by civil society organisations and other stakeholders in AI advocacy activities.

We believe that policy and technology responses in this area must:

  • Ensure protection of human rights, in particular the right to freedom of expression and the right to privacy;
  • Ensure accountability and transparency of AI;
  • Encourage governments to review the adequacy of any legal and policy frameworks, and regulations on AI with regard to the protection of freedom of expression and privacy;
  • Be informed by a holistic understanding of the impact of the technology: case studies and empirical research on the impact of AI on human rights must be collected; and
  • Be developed in collaboration with a broad range of stakeholders, including civil society and expert networks.”

 

You can find the link and original report below:

https://privacyinternational.org/sites/default/files/2018-04/Privacy%20and%20Freedom%20of%20Expression%20%20In%20the%20Age%20of%20Artificial%20Intelligence.pdf

Why The World Needs To Regulate Autonomous Weapons, And Soon

 


 

 

The School of Media Studies at the New School in New York City

April 27, 2018

 

 

 

 

Abstract

“The Convention on Certain Conventional Weapons (CCW) at the UN has just concluded a second round of meetings on lethal autonomous weapons systems in Geneva, under the auspices of what is known as a Group of Governmental Experts. Both the urgency and significance of the discussions in that forum have been heightened by the rising concerns over artificial intelligence (AI) arms races and the increasing use of digital technologies to subvert democratic processes. Some observers have expressed concerns that the CCW discussions might be hopeless or futile, and that no consensus is emerging from them. Those concerns miss the significance of what has already happened and the opportunities going forward.

For some observers, the concerns over an AI arms race have overshadowed concerns about autonomous weapons. Some have even characterized the Campaign to Stop Killer Robots as aiming to “ban artificial intelligence” itself. I do not agree with these views, and argue in a forthcoming paper for I/S: A Journal of Law and Policy for the Information Society that various scholars and media use the term “AI arms race” to mean very different and even incompatible things, ranging from economic competition, to automated cyberwarfare, to embedding AI in weapons. As a result, it does not really make sense to talk about “an AI arms race” as a singular phenomenon to be addressed by a single policy. Moreover, the discussions taking place at the UN are focused on autonomy in weapons, which is only partially related to larger issues of an AI arms race—although establishing norms on the automated control of conventional weapons, such as meaningful human control, could certainly advance discussion in other areas, such as cyberwarfare and AI ethics.

The central issue in the CCW discussions over lethal autonomous weapons is the necessity for human control over what the International Committee of the Red Cross has called the “critical functions” of targeting and engagement in attacks. AI could be used in various ways by militaries, including in weapons systems, and even in the critical functions of targeting and engagement. The issue is not what kind of technology is used or its sophistication, but whether and how the authority to target and engage is delegated to automated processes, and what implications this has for human responsibility and accountability, as well as human rights and human dignity.”

 

You can reach the article from the link below:

https://thebulletin.org/military-applications-artificial-intelligence/why-world-needs-regulate-autonomous-weapons-and-soon

 
