Smart Glasses and Data Protection

Brussels, January 2019

 

 

Executive Summary

Smart glasses are wearable computers with a mobile Internet connection that are worn like glasses or mounted on regular glasses. They allow users to display information in their field of view and to capture information from the physical world, using for example a camera, microphone and GPS receiver, for augmented-reality (AR) applications.

The initial release of Google’s smart glasses (Google Glass) gained significant attention worldwide and increased the popularity of such devices. While the target audience was initially the business sector (e.g. logistics, training simulations) with unit prices of about EUR 1500, competitors such as Snap Inc. have recently addressed a wider and younger audience with cheaper models priced at about EUR 150.

While smart glasses may be very useful tools in many fields of application (technical maintenance, education, construction, etc.), their use has been controversial because they are also considered to have a high potential to undermine the privacy of individuals, especially where they are not designed in a privacy-friendly way. The data protection impact of recording videos of persons in public places has already been discussed in the context of CCTV and dashcams. The sensors may record environmental information including video streams of the user’s field of view, audio recordings and localisation data. Furthermore, smart glasses may allow their users to process invisible personal data of others, such as the device identifiers that devices regularly emit in the form of Wi-Fi or Bluetooth radio signals. These data may contain personal data not only of the users, but also of individuals in their proximity (non-users). Applications on smart glasses may process recorded data locally, or remotely by third parties after an automated transfer via the Internet. Especially when smart glasses are used in densely populated public areas, existing safeguards that inform data subjects by means of acoustic or visual indicators (LEDs) are not effective. Smart glasses may also leak personal data of their users to their environment: depending on the design, non-users may be able to watch the smart glass display, which may contain personal data such as private mails, pictures, etc. Like any other Internet-connected device, smart glasses may suffer from security loopholes that can be actively exploited to steal data or run unauthorised software.
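
To make the point about invisible data concrete, the sketch below shows, in Python, how a wearable device could passively observe the device identifiers that nearby phones broadcast in Wi-Fi probe requests. It is a minimal illustration only, not taken from the EDPS report: it assumes the scapy library, root privileges, and a wireless interface already in monitor mode, hypothetically named "wlan0mon".

    # Illustrative sketch: passively collecting device identifiers (MAC
    # addresses) from Wi-Fi probe requests, as a smart glass worn in a
    # crowd could. Assumes root privileges, the scapy library, and a
    # monitor-mode interface named "wlan0mon" (an assumption).
    from scapy.all import sniff
    from scapy.layers.dot11 import Dot11ProbeReq

    seen = set()

    def handle(pkt):
        # Probe requests carry the sender's MAC address in addr2; unless
        # the device randomises it, this is a persistent identifier.
        if pkt.haslayer(Dot11ProbeReq) and pkt.addr2 and pkt.addr2 not in seen:
            seen.add(pkt.addr2)
            print("device identifier observed:", pkt.addr2)

    sniff(iface="wlan0mon", prn=handle, store=False)

Even this toy listener links a hardware identifier to a time and place without any visible indication to the people concerned, which is why the report treats such radio signals as personal data of non-users.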

While smart glasses so far play only a marginal role in everyday life, experts see significant potential for AR to increase productivity in the professional sector, and the smart-glasses initiatives of Facebook, Apple and Amazon are expected to lead to increasing adoption in the consumer market. Technological improvements in facial or voice recognition and battery life may allow for novel use cases of smart glasses in many sectors. For instance, in the law enforcement field, reports revealed in early 2018 that police officers employ smart glasses to match individuals (in crowds) against a database of criminal suspects using facial recognition. In this dynamic field, data protection authorities are challenged to keep pace with the rapid developments and to provide guidelines. Many aspects have already been covered in the WP29 Opinion on the Internet of Things.

The GDPR provides a harmonised set of principles and a system of tools, first and foremost for the controllers, processors and developers of smart glasses, to assess and control their impact on data protection and privacy. At the current stage of development, there does not appear to be an urgent need for technology-specific legislative initiatives. However, the development of smart glasses and similar connected recording devices underlines the need to establish a robust framework for privacy and electronic communications, as proposed with the ePrivacy Regulation.

 

You can find the full report at the link below:

https://edps.europa.eu/sites/edp/files/publication/19-01-18_edps-tech-report-1-smart_glasses_en.pdf

AI Now Report 2018

 

December 2018

 

RECOMMENDATIONS

 

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain. The implementation of AI systems is expanding rapidly, without adequate governance, oversight, or accountability regimes. Domains like health, education, criminal justice, and welfare all have their own histories, regulatory frameworks, and hazards. However, a national AI safety body or general AI standards and certification model will struggle to meet the sectoral expertise requirements needed for nuanced regulation. We need a sector-specific approach that does not prioritize the technology, but focuses on its application within a given domain. Useful examples of sector-specific approaches include the United States Federal Aviation Administration and the National Highway Traffic Safety Administration.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest. Such regulation should include national laws that require strong oversight, clear limitations, and public transparency. Communities should have the right to reject the application of these technologies in both public and private contexts. Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance. Affect recognition deserves particular attention. Affect recognition is a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and “worker engagement” based on images or video of faces. These claims are not backed by robust scientific evidence, and are being applied in unethical and irresponsible ways that often recall the pseudosciences of phrenology and physiognomy. Linking affect recognition to hiring, access to insurance, education, and policing creates deeply concerning risks, at both an individual and societal level.
  3. The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems. Government regulation is an important component, but leading companies in the AI industry also need internal accountability structures that go beyond ethics guidelines. This should include rank-and-file employee representation on the board of directors, external ethics advisory boards, and the implementation of independent monitoring and transparency efforts. Third party experts should be able to audit and publish about key systems, and companies need to ensure that their AI infrastructures can be understood from “nose to tail,” including their ultimate application and use.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector. Vendors and developers who create AI and automated decision systems for use in government should agree to waive any trade secrecy or other legal claim that inhibits full auditing and understanding of their software. Corporate secrecy laws are a barrier to due process: they contribute to the “black box effect” rendering systems opaque and unaccountable, making it hard to assess bias, contest decisions, or remedy errors. Anyone procuring these technologies for use in the public sector should demand that vendors waive these claims before entering into any agreements.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers. Organizing and resistance by technology workers has emerged as a force for accountability and ethical decision making. Technology companies need to protect workers’ ability to organize, whistleblow, and make ethical choices about what projects they work on. This should include clear policies accommodating and protecting conscientious objectors, ensuring workers the right to know what they are working on, and the ability to abstain from such work without retaliation or retribution. Workers raising ethical concerns must also be protected, as should whistleblowing in the public interest.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services. The hype around AI is only growing, leading to widening gaps between marketing promises and actual product performance. With these gaps come increasing risks to both individuals and commercial customers, often with grave consequences. Much like other products and services that have the potential to seriously impact or exploit populations, AI vendors should be held to high standards for what they can promise, especially when the scientific evidence to back these promises is inadequate and the longer-term consequences are unknown.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces. Technology companies and the AI field as a whole have focused on the “pipeline model,” looking to train and hire more diverse employees. While this is important, it overlooks what happens once people are hired into workplaces that exclude, harass, or systemically undervalue people on the basis of gender, race, sexuality, or disability. Companies need to examine the deeper issues in their workplaces, and the relationship between exclusionary cultures and the products they build, which can produce tools that perpetuate bias and discrimination. This change in focus needs to be accompanied by practical action, including a commitment to end pay and opportunity inequity, along with transparency measures about hiring and retention.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.” For meaningful accountability, we need to better understand and track the component parts of an AI system and the full supply chain on which it relies: that means accounting for the origins and use of training data, test data, models, application program interfaces (APIs), and other infrastructural components over a product life cycle. We call this accounting for the “full stack supply chain” of AI systems, and it is a necessary condition for a more responsible form of auditing. The full stack supply chain also includes understanding the true environmental and labor costs of AI systems. This incorporates energy use, the use of labor in the developing world for content moderation and training data creation, and the reliance on clickworkers to develop and maintain AI systems.
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues. The people most at risk of harm from AI systems are often those least able to contest the outcomes. We need increased support for robust mechanisms of legal redress and civic participation. This includes supporting public advocates who represent those cut off from social services due to algorithmic decision making, civil society organizations and labor organizers that support groups that are at risk of job loss and exploitation, and community-based infrastructures that enable public participation.
  10. University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

You can find the full report at the link below:

https://ainowinstitute.org/AI_Now_2018_Report.pdf

Artificial Intelligence and Life in 2030

 

Executive Summary

 

Artificial Intelligence (AI) is a science and a set of computational technologies that are inspired by—but typically operate quite differently from—the ways people use their nervous systems and bodies to sense, learn, reason, and take action. While the rate of progress in AI has been patchy and unpredictable, there have been significant advances since the field’s inception sixty years ago. Once a mostly academic area of study, twenty-first century AI enables a constellation of mainstream technologies that are having a substantial impact on everyday lives. Computer vision and AI planning, for example, drive the video games that are now a bigger entertainment industry than Hollywood. Deep learning, a form of machine learning based on layered representations of variables referred to as neural networks, has made speech-understanding practical on our phones and in our kitchens, and its algorithms can be applied widely to an array of applications that rely on pattern recognition. Natural Language Processing (NLP) and knowledge representation and reasoning have enabled a machine to beat the Jeopardy champion and are bringing new power to Web searches.
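
As a minimal sketch of the “layered representations” idea mentioned above (our own illustration, not from the report; the sizes and names are arbitrary):

    # Each layer maps its input to a new representation; stacking such
    # layers is what makes a neural network "deep".
    import numpy as np

    def layer(x, weights, bias):
        # Linear transformation followed by a ReLU nonlinearity.
        return np.maximum(0.0, x @ weights + bias)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 8))                              # raw input features
    h1 = layer(x, rng.normal(size=(8, 16)), np.zeros(16))    # first representation
    h2 = layer(h1, rng.normal(size=(16, 4)), np.zeros(4))    # deeper representation
    print(h2.shape)  # (1, 4)

In a trained network the weights are learned from data rather than drawn at random, but the layered structure is the same.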

While impressive, these technologies are highly tailored to particular tasks. Each application typically requires years of specialized research and careful, unique construction. In similarly targeted applications, substantial increases in the future uses of AI technologies, including more self-driving cars, healthcare diagnostics and targeted treatments, and physical assistance for elder care can be expected. AI and robotics will also be applied across the globe in industries struggling to attract younger workers, such as agriculture, food processing, fulfillment centers, and factories. They will facilitate delivery of online purchases through flying drones, self-driving trucks, or robots that can get up the stairs to the front door.

This report is the first in a series to be issued at regular intervals as a part of the One Hundred Year Study on Artificial Intelligence (AI100). Starting from a charge given by the AI100 Standing Committee to consider the likely influences of AI in a typical North American city by the year 2030, the 2015 Study Panel, comprising experts in AI and other relevant areas, focused their attention on eight domains they considered most salient: transportation; service robots; healthcare; education; low-resource communities; public safety and security; employment and workplace; and entertainment. In each of these domains, the report both reflects on progress in the past fifteen years and anticipates developments in the coming fifteen years. Though drawing from a common source of research, each domain reflects different AI influences and challenges, such as the difficulty of creating safe and reliable hardware (transportation and service robots), the difficulty of smoothly interacting with human experts (healthcare and education), the challenge of gaining public trust (low-resource communities and public safety and security), the challenge of overcoming fears of marginalizing humans (employment and workplace), and the social and societal risk of diminishing interpersonal interactions (entertainment). The report begins with a reflection on what constitutes Artificial Intelligence, and concludes with recommendations concerning AI-related policy. These recommendations include accruing technical expertise about AI in government and devoting more resources—and removing impediments—to research on the fairness, security, privacy, and societal impacts of AI systems.

Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future. Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy, are likely to emerge between now and 2030, the period this report considers. At the same time, many of these developments will spur disruptions in how human labor is augmented or replaced by AI, creating new challenges for the economy and society more broadly. Application design and policy decisions made in the near term are likely to have long-lasting influences on the nature and directions of such developments, making it important for AI researchers, developers, social scientists, and policymakers to balance the imperative to innovate with mechanisms to ensure that AI’s economic and social benefits are broadly shared across society. If society approaches these technologies primarily with fear and suspicion, missteps that slow AI’s development or drive it underground will result, impeding important work on ensuring the safety and reliability of AI technologies. On the other hand, if society approaches AI with a more open mind, the technologies emerging from the field could profoundly transform society for the better in the coming decades.

 

Peter Stone, University of Texas at Austin, Chair
Rodney Brooks, Rethink Robotics
Erik Brynjolfsson, Massachusetts Institute of Technology
Ryan Calo, University of Washington
Oren Etzioni, Allen Institute for AI
Greg Hager, Johns Hopkins University
Julia Hirschberg, Columbia University
Shivaram Kalyanakrishnan, Indian Institute of Technology Bombay
Ece Kamar, Microsoft Research
Sarit Kraus, Bar Ilan University
Kevin Leyton-Brown, University of British Columbia
David Parkes, Harvard University
William Press, University of Texas at Austin
AnnaLee (Anno) Saxenian, University of California, Berkeley
Julie Shah, Massachusetts Institute of Technology
Milind Tambe, University of Southern California
Astro Teller, X

 

You can find the full report at the link below:

https://ai100.stanford.edu/2016-report

AI at Google: Our Principles

Sundar Pichai

CEO

June 7, 2018

 

 

 

At its heart, AI is computer programming that learns and adapts. It can’t solve every problem, but its potential to improve our lives is profound. At Google, we use AI to make products more useful—from email that’s spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy.

Beyond our products, we’re using AI to help people tackle urgent problems. A pair of high school students are building AI-powered sensors to predict the risk of wildfires. Farmers are using it to monitor the health of their herds. Doctors are starting to use AI to help diagnose cancer and prevent blindness. These clear benefits are why Google invests heavily in AI research and development, and makes AI technologies widely available to others via our tools and open-source code.

We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.

We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.

Objectives for AI applications

We will assess AI applications in view of the following objectives. We believe that AI should:

  1. Be socially beneficial. 

The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.

AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.

  2. Avoid creating or reinforcing unfair bias.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief. (A brief illustrative sketch of one way such bias can be measured follows after this list of objectives.)

  3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.  We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

  4. Be accountable to people.

We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

  5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

  6. Uphold high standards of scientific excellence.

Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to progress AI development.

We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

  7. Be made available for uses that accord with these principles.

Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:

 

  • Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use
  • Nature and uniqueness: whether we are making available technology that is unique or more generally available
  • Scale: whether the use of this technology will have significant impact
  • Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions
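
As noted under objective 2, the following is a small, hypothetical illustration (our own, not Google’s methodology) of one common way to quantify unfair bias: the difference in positive-decision rates between two groups, often called the demographic parity difference. All data below are synthetic.

    import numpy as np

    def demographic_parity_diff(decisions, group):
        # decisions: 0/1 model outcomes; group: 0/1 group membership.
        rate_a = decisions[group == 0].mean()
        rate_b = decisions[group == 1].mean()
        return abs(rate_a - rate_b)

    decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(demographic_parity_diff(decisions, group))  # 0.5 -> large disparity

A gap near zero does not by itself prove a system is fair, and the appropriate metric depends on context, which is why distinguishing fair from unfair biases “is not always simple”.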

AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

 

  1. Technologies that cause or are likely to cause overall harm.  Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

 

We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.

AI for the long term

While this is how we’re choosing to approach AI, we understand there is room for many voices in this conversation. As AI technologies progress, we’ll work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will continue to share what we’ve learned to improve AI technologies and practices.

We believe these principles are the right foundation for our company and the future development of AI. This approach is consistent with the values laid out in our original Founders’ Letter back in 2004. There we made clear our intention to take a long-term perspective, even if it means making short-term tradeoffs. We said it then, and we believe it now.

 

You can find the original text at the link below:

https://blog.google/topics/ai/ai-principles/

Privacy and Freedom of Expression In the Age of Artificial Intelligence

Article 19 & Privacy International

April 2018

 

Executive Summary

“Artificial Intelligence (AI) is part of our daily lives. This technology shapes how people access information, interact with devices, share personal information, and even understand foreign languages. It also transforms how individuals and groups can be tracked and identified, and dramatically alters what kinds of information can be gleaned about people from their data.

AI has the potential to revolutionise societies in positive ways. However, as with any scientific or technological advancement, there is a real risk that the use of new tools by states or corporations will have a negative impact on human rights.

While AI impacts a plethora of rights, ARTICLE 19 and Privacy International are particularly concerned about the impact it will have on the right to privacy and the right to freedom of expression and information.

This scoping paper focuses on applications of ‘artificial narrow intelligence’: in particular, machine learning and its implications for human rights.

The aim of the paper is fourfold:

  1. Present key technical definitions to clarify the debate;
  2. Examine key ways in which AI impacts the right to freedom of expression and the right to privacy and outline key challenges;
  3. Review the current landscape of AI governance, including various existing legal, technical, and corporate frameworks and industry-led AI initiatives that are relevant to freedom of expression and privacy; and
  4. Provide initial suggestions for rights-based solutions which can be pursued by civil society organisations and other stakeholders in AI advocacy activities.

We believe that policy and technology responses in this area must:

  • Ensure protection of human rights, in particular the right to freedom of expression and the right to privacy;
  • Ensure accountability and transparency of AI;
  • Encourage governments to review the adequacy of any legal and policy frameworks, and regulations on AI with regard to the protection of freedom of expression and privacy;
  • Be informed by a holistic understanding of the impact of the technology: case studies and empirical research on the impact of AI on human rights must be collected; and
  • Be developed in collaboration with a broad range of stakeholders, including civil society and expert networks.”

 

You can find the original report at the link below:

https://privacyinternational.org/sites/default/files/2018-04/Privacy%20and%20Freedom%20of%20Expression%20%20In%20the%20Age%20of%20Artificial%20Intelligence.pdf

Artificial Intelligence for Europe

 


Brussels, 25 April 2018

 

INTRODUCTION – Embracing Change

Artificial intelligence (AI) is already part of our lives – it is not science fiction. From using a virtual personal assistant to organise our working day, to travelling in a self-driving vehicle, to our phones suggesting songs or restaurants that we might like, AI is a reality.

Beyond making our lives easier, AI is helping us to solve some of the world’s biggest challenges: from treating chronic diseases or reducing fatality rates in traffic accidents to fighting climate change or anticipating cybersecurity threats.

In Denmark, AI is helping save lives by allowing emergency services to diagnose cardiac arrests or other conditions based on the sound of a caller’s voice. In Austria, it is helping radiologists detect tumours more accurately by instantly comparing X-rays with a large amount of other medical data.

Many farms across Europe are already using AI to monitor the movement, temperature and feed consumption of their animals. The AI system can then automatically adapt the heating and feeding machinery to help farmers monitor their animals’ welfare and to free them up for other tasks. And AI is also helping European manufacturers to become more efficient and to help factories return to Europe.

These are some of the many examples of what we know AI can do across all sectors, from energy to education, from financial services to construction. Countless more examples that cannot be imagined today will emerge over the next decade.

Like the steam engine or electricity in the past, AI is transforming our world, our society and our industry. Growth in computing power, availability of data and progress in algorithms have turned AI into one of the most strategic technologies of the 21st century. The stakes could not be higher. The way we approach AI will define the world we live in. Amid fierce global competition, a solid European framework is needed.

The European Union (EU) should have a coordinated approach to make the most of the opportunities offered by AI and to address the new challenges that it brings. The EU can lead the way in developing and using AI for good and for all, building on its values and its strengths. It can capitalise on:

– world-class researchers, labs and startups. The EU is also strong in robotics and has world-leading industry, notably in the transport, healthcare and manufacturing sectors that should be at the forefront of AI adoption;

– the Digital Single Market. Common rules, for example on data protection and the free flow of data in the EU, cybersecurity and connectivity help companies to do business, scale up across borders and encourage investments; and

– a wealth of industrial, research and public sector data which can be unlocked to feed AI systems. In parallel to this Communication, the Commission is taking action to make data sharing easier and to open up more data – the raw material for AI – for re-use. This includes data from the public sector in particular, such as on public utilities and the environment, as well as research and health data.

European leaders have put AI at the top of their agendas. On 10 April 2018, 24 Member States and Norway committed to working together on AI. Building on this strong political endorsement, it is time to make significant efforts to ensure that:

– Europe is competitive in the AI landscape, with bold investments that match its economic weight. This is about supporting research and innovation to develop the next generation of AI technologies, and deployment to ensure that companies – in particular small and medium-sized enterprises, which make up 99% of businesses in the EU – are able to adopt AI.

– No one is left behind in the digital transformation. AI is changing the nature of work: jobs will be created, others will disappear, most will be transformed. Modernisation of education, at all levels, should be a priority for governments. All Europeans should have every opportunity to acquire the skills they need. Talent should be nurtured, gender balance and diversity encouraged.

– New technologies are based on values. The General Data Protection Regulation will become a reality on 25 May 2018. It is a major step for building trust, essential in the long term for both people and companies. This is where the EU’s sustainable approach to technologies creates a competitive edge, by embracing change on the basis of the Union’s values. As with any transformative technology, some AI applications may raise new ethical and legal questions, for example related to liability or potentially biased decision-making. The EU must therefore ensure that AI is developed and applied in an appropriate framework which promotes innovation and respects the Union’s values and fundamental rights as well as ethical principles such as accountability and transparency. The EU is also well placed to lead this debate on the global stage.

This is how the EU can make a difference – and be the champion of an approach to AI that benefits people and society as a whole.

 

You can find the original report at the link below:

https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe

 

EU Artificial Intelligence Declaration

 


Brussels, 10 April 2018

 

In October 2017, the European Council asked the Commission to present a European approach to Artificial Intelligence (AI). The Commission has announced that it will adopt a Communication on AI in April 2018.

This Declaration builds on the achievements and investments of Europe in AI as well as the progress towards the creation of a Digital Single Market.

The participating Member States agree to cooperate on:

  • Boosting Europe’s technology and industrial capacity in AI and its uptake, including better access to public sector data; these are essential conditions to influence AI development, fuelling innovative business models and creating economic growth and new qualified jobs;

 

  • Addressing socio-economic challenges, such as the transformation of the labour markets and modernising Europe’s education and training systems, including upskilling & reskilling EU citizens;

 

  • Ensuring an adequate legal and ethical framework, building on EU fundamental rights and values, including privacy and protection of personal data, as well as principles such as transparency and accountability.

In particular, the participating Member States agree to:

  • Work towards a comprehensive and integrated European approach on AI to increase the EU’s competitiveness, attractiveness and excellence in R&D in AI, and where needed review and modernise national policies to ensure that the opportunities arising from AI are seized and the emerging challenges are addressed.

 

  • Encourage discussions with stakeholders on AI and support the development of a broad and diverse community of stakeholders in a “European AI Alliance” in order to build awareness and foster the development of AI in a manner that maximizes its benefit to economy and society.

 

  • Consider the allocation of R&D&I funding to the further development and deployment of AI, including on disruptive innovation and applications, as a matter of priority.

 

  • Exchange views with other Member States and the Commission on AI research agendas and strategies to create synergies in relevant R&D&I funding schemes across Europe.

 

  • Cooperate on reinforcing AI research centres and supporting their pan-European dimension. Contribute to the establishment of a dense network of Digital Innovation Hubs at European level.

 

  • Contribute to efforts to make AI available and beneficial to government administrations and to all companies, in particular SMEs and companies from non-technological sectors.

 

  • Exchange best practices on procuring and using AI in government administrations of any size and at any level, and in the public sector more generally.

 

  • Contribute to efforts to make more public sector data available, support private businesses to follow this example, and improve the re-usability of scientific research data resulting from public funding, without prejudice to existing rights, regulations and contractual freedom.

 

  • Exchange views on ethical and legal frameworks related to AI in order to ensure responsible AI deployment.

 

  • Contribute to the sustainability and trustworthiness of AI-based solutions, for instance by working towards improved information security, promoting safety and vigilance in the design and implementation, and increasing accountability of AI systems.

 

  • Ensure that humans remain at the centre of the development, deployment and decision-making of AI, prevent the harmful creation and use of AI applications, and advance public understanding of AI.

 

  • Exchange views on the impact of AI on labour markets and discuss best practices on how to mitigate such impacts, including on the adoption of measures in education and practical training on skills to be acquired to allow citizens to benefit from AI and ensure social stability.

 

  • Engage in a continuous dialogue with the Commission on AI.

 

The signatories of this declaration commit to a regular assessment of the achievements and progress made on the matters agreed above and on the adoption of the appropriate measures in order to adequately react to the emerging evolution of AI and the opportunities and challenges related thereto.

 

You can find the original text at the link below:

 

Breakthrough Technologies – Robotics, Innovation and Intellectual Property

 

 

 

Abstract

“Robotics technology and the increasing sophistication of artificial intelligence are breakthrough innovations with significant growth prospects and the potential to disrupt existing economic and social facets of everyday life. Few studies have analyzed the developments of robotics innovation. This paper closes this gap by analyzing how innovation in robotics is taking place, how it diffuses, and what role intellectual property (IP) plays. The paper finds that robotics clusters are mainly located in the US and Europe, but increasingly also in the Republic of Korea and China. The robotics innovation ecosystem builds on cooperative networks of actors, including individuals, research institutions, and firms. Governments play a significant role in supporting robotics innovation, in particular through funding, military demand, and national robotics strategies. Robotics competitions and prizes provide an important incentive to innovation.

Patents are used to exclude third parties, to secure freedom to operate, to license technologies and to avoid litigation. The countries with the highest number of filings are Japan, China, the Republic of Korea and the US. The growing stock of patents owned by universities and public research organisations (PROs), in particular in China, is noteworthy too. Automotive and electronics companies are still the largest patent filers, but new actors in fields such as medical technologies and the Internet are emerging.

Secrecy is often used as a tool to appropriate innovation. Copyright protection is relevant to robotics too, mainly in its role in protecting software, and more recently in protecting so-called Netlists. Finally, proprietary approaches co-exist with open-source robotics platforms which are developing rapidly in robotics clusters.”

 

You can find the full report at the link below:

http://www.wipo.int/edocs/pubdocs/en/wipo_pub_econstat_wp_30.pdf

Artificial Intelligence – The Public Policy Opportunity

 


Intel Corporation, 2017

 

Intel and Artificial Intelligence

“Intel powers the cloud and billions of smart, connected computing devices. Due to the decreasing cost of computing enabled by Moore’s Law and the increasing availability of connectivity, these connected devices are now generating millions of terabytes of data every day. Recent breakthroughs in computer and data science give us the ability to analyze that data in a timely manner and to derive immense value from it. As Intel distributes the computing capability of the data center across the entire global network, the impact of artificial intelligence is significantly increasing. Artificial intelligence is creating an opportunity to drive a new wave of economic progress while solving some of the world’s most difficult problems. This is the artificial intelligence (AI) opportunity. To allow AI to realize its potential, governments need to create a public policy environment that fosters AI innovation, while also mitigating unintended societal consequences. This document presents Intel’s AI public policy recommendations.”

 

 

You can find the full report at the link below:

https://blogs.intel.com/policy/files/2017/10/Intel-Artificial-Intelligence-Public-Policy-White-Paper-2017.pdf

Automated Individual Decision-Making And Profiling

 


ARTICLE 29 DATA PROTECTION WORKING PARTY

3 October 2017 – 17/EN WP 251

 

 

INTRODUCTION

The General Data Protection Regulation (the GDPR) specifically addresses profiling and automated individual decision-making, including profiling.

Profiling and automated decision-making are used in an increasing number of sectors, both private and public. Banking and finance, healthcare, taxation, insurance, marketing and advertising are just a few examples of the fields where profiling is being carried out more regularly to aid decision-making.

Advances in technology and the capabilities of big data analytics, artificial intelligence and machine learning have made it easier to create profiles and make automated decisions with the potential to significantly impact individuals’ rights and freedoms.

The widespread availability of personal data on the internet and from Internet of Things (IoT) devices, and the ability to find correlations and create links, can allow aspects of an individual’s personality or behaviour, interests and habits to be determined, analysed and predicted.
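
As a purely illustrative sketch of this point (synthetic data; scikit-learn assumed; nothing here is taken from the guidelines), a handful of routinely collected behavioural signals is enough to train a model that assigns new individuals to a profile:

    # Hypothetical profiling example: behavioural features -> predicted segment.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: [pages visited per day, share of night-time activity,
    #            purchases per month] for one (synthetic) individual.
    X = np.array([[40, 0.8, 9], [5, 0.1, 1], [33, 0.7, 7], [8, 0.2, 2]])
    y = np.array([1, 0, 1, 0])  # 1 = labelled as a "high-spending" profile

    model = LogisticRegression().fit(X, y)
    print(model.predict([[30, 0.6, 6]]))  # a new individual is assigned a profile

The individual concerned never sees this model or its features, which is precisely the opacity concern described below.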

Profiling and automated decision-making can be useful for individuals and organisations as well as for the economy and society as a whole, delivering benefits such as increased efficiencies and resource savings.

They have many commercial applications: for example, they can be used to segment markets better and to tailor services and products to individual needs. Medicine, education, healthcare and transportation can also all benefit from these processes.

However, profiling and automated decision-making can pose significant risks for individuals’ rights and freedoms which require appropriate safeguards.

These processes can be opaque. Individuals might not know that they are being profiled or understand what is involved.

Profiling can perpetuate existing stereotypes and social segregation. It can also lock a person into a specific category and restrict them to their suggested preferences. This can undermine their freedom to choose, for example, certain products or services such as books, music or newsfeeds. It can lead to inaccurate predictions, denial of services and goods and unjustified discrimination in some cases.

The GDPR introduces new provisions to address the risks arising from profiling and automated decision-making, notably, but not limited to, privacy. The purpose of these guidelines is to clarify those provisions.

This document covers:

– Definitions of profiling and automated decision-making and the GDPR approach to these in general – Chapter II

– Specific provisions on automated decision-making as defined in Article 22 – Chapter III

– General provisions on profiling and automated decision-making – Chapter IV

– Children and profiling – Chapter V

– Data protection impact assessments – Chapter VI

 

The Annexes provide best practice recommendations, building on the experience gained in EU Member States.

 

 

You can find the guidelines at the link below:

http://ec.europa.eu/newsroom/document.cfm?doc_id=47742