Artificial Intelligence for Europe





INTRODUCTION – Embracing Change

Artificial intelligence (AI) is already part of our lives – it is not science fiction. From using a virtual personal assistant to organise our working day, to travelling in a self-driving vehicle, to our phones suggesting songs or restaurants that we might like, AI is a reality.

Beyond making our lives easier, AI is helping us to solve some of the world’s biggest challenges: from treating chronic diseases or reducing fatality rates in traffic accidents to fighting climate change or anticipating cybersecurity threats.

In Denmark, AI is helping save lives by allowing emergency services to diagnose cardiac arrests and other conditions based on the sound of a caller’s voice. In Austria, it is helping radiologists detect tumours more accurately by instantly comparing x-rays with a large amount of other medical data.

Many farms across Europe are already using AI to monitor the movement, temperature and feed consumption of their animals. The AI system can then automatically adapt the heating and feeding machinery, helping farmers monitor their animals’ welfare and freeing them up for other tasks. And AI is also helping European manufacturers to become more efficient and factories to return to Europe.

These are some of the many examples of what we know AI can do across all sectors, from energy to education, from financial services to construction. Countless more examples that cannot be imagined today will emerge over the next decade.

Like the steam engine or electricity in the past, AI is transforming our world, our society and our industry. Growth in computing power, availability of data and progress in algorithms have turned AI into one of the most strategic technologies of the 21st century. The stakes could not be higher. The way we approach AI will define the world we live in. Amid fierce global competition, a solid European framework is needed.

The European Union (EU) should have a coordinated approach to make the most of the opportunities offered by AI and to address the new challenges that it brings. The EU can lead the way in developing and using AI for good and for all, building on its values and its strengths. It can capitalise on:

– world-class researchers, labs and startups. The EU is also strong in robotics and has world-leading industry, notably in the transport, healthcare and manufacturing sectors that should be at the forefront of AI adoption;

– the Digital Single Market. Common rules, for example on data protection, the free flow of data in the EU, cybersecurity and connectivity, help companies to do business, scale up across borders and encourage investments; and

– a wealth of industrial, research and public sector data which can be unlocked to feed AI systems. In parallel to this Communication, the Commission is taking action to make data sharing easier and to open up more data – the raw material for AI – for re-use. This includes data from the public sector in particular, such as on public utilities and the environment, as well as research and health data.

European leaders have put AI at the top of their agendas. On 10 April 2018, 24 Member States and Norway committed to working together on AI. Building on this strong political endorsement, it is time to make significant efforts to ensure that:

– Europe is competitive in the AI landscape, with bold investments that match its economic weight. This is about supporting research and innovation to develop the next generation of AI technologies, and deployment to ensure that companies – in particular small and medium-sized enterprises, which make up 99% of businesses in the EU – are able to adopt AI.

– No one is left behind in the digital transformation. AI is changing the nature of work: jobs will be created, others will disappear, most will be transformed. Modernisation of education, at all levels, should be a priority for governments. All Europeans should have every opportunity to acquire the skills they need. Talent should be nurtured, gender balance and diversity encouraged.

– New technologies are based on values. The General Data Protection Regulation will become a reality on 25 May 2018. It is a major step for building trust, essential in the long term for both people and companies. This is where the EU’s sustainable approach to technologies creates a competitive edge, by embracing change on the basis of the Union’s values. As with any transformative technology, some AI applications may raise new ethical and legal questions, for example related to liability or potentially biased decision-making. The EU must therefore ensure that AI is developed and applied in an appropriate framework which promotes innovation and respects the Union’s values and fundamental rights as well as ethical principles such as accountability and transparency. The EU is also well placed to lead this debate on the global stage.

This is how the EU can make a difference – and be the champion of an approach to AI that benefits people and society as a whole.


You can reach the original report via the link below:


When AIs Outperform Doctors




A. Michael Froomkin, Ian Kerr & Joëlle Pineau

We Robot 2018 Conference





“Someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. What will the dominance of ML diagnostics mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors, and—in the longer run—for the quality of medical diagnostics itself?

This article argues that once ML diagnosticians, such as those based on neural networks, are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented carefully, a physician’s duty to use ML systems in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve. In time, effective machine learning could create overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Ultimately, a similar dynamic might extend to treatment also. If we reach the point where the bulk of clinical outcomes collected in databases are ML-generated diagnoses, this may result in future decision scenarios that are not easily audited or understood by human doctors. Given the well-documented fact that treatment strategies are often not as effective when deployed in real clinical practice compared to preliminary evaluation, the lack of transparency introduced by the ML algorithms could lead to a decrease in quality of care. The article describes salient technical aspects of this scenario particularly as it relates to diagnosis and canvasses various possible technical and legal solutions that would allow us to avoid these unintended consequences of medical malpractice law. Ultimately, we suggest there is a strong case for altering existing medical liability rules in order to avoid a machine only diagnostic regime. We argue that the appropriate revision to the standard of care requires the maintenance of meaningful participation by physicians in the loop.”


You can find the original paper via the link below:

Artificial Intelligence: The Public Policy Opportunity


 2017, Intel Corporation


Intel and Artificial Intelligence

“Intel powers the cloud and billions of smart, connected computing devices. Due to the decreasing cost of computing enabled by Moore’s Law and the increasing availability of connectivity, these connected devices are now generating millions of terabytes of data every day. Recent breakthroughs in computer and data science give us the ability to timely analyze and derive immense value from that data. As Intel distributes the computing capability of the data center across the entire global network, the impact of artificial intelligence is significantly increasing. Artificial intelligence is creating an opportunity to drive a new wave of economic progress while solving some of the world’s most difficult problems. This is the artificial intelligence (AI) opportunity. To allow AI to realize its potential, governments need to create a public policy environment that fosters AI innovation, while also mitigating unintended societal consequences. This document presents Intel’s AI public policy recommendations.”



You can reach the full report from the link below:

Automated Individual Decision-Making And Profiling



3 October 2017 – 17/EN WP 251




The General Data Protection Regulation (the GDPR) specifically addresses profiling and automated individual decision-making, including profiling.

Profiling and automated decision-making are used in an increasing number of sectors, both private and public. Banking and finance, healthcare, taxation, insurance, marketing and advertising are just a few examples of the fields where profiling is being carried out more regularly to aid decision-making.

Advances in technology and the capabilities of big data analytics, artificial intelligence and machine learning have made it easier to create profiles and make automated decisions with the potential to significantly impact individuals’ rights and freedoms.

The widespread availability of personal data on the internet and from Internet of Things (IoT) devices, and the ability to find correlations and create links, can allow aspects of an individual’s personality or behaviour, interests and habits to be determined, analysed and predicted.

Profiling and automated decision-making can be useful for individuals and organisations as well as for the economy and society as a whole, delivering benefits such as increased efficiency and resource savings.

They have many commercial applications: for example, they can be used to segment markets more precisely and to tailor services and products to individual needs. Medicine, education, healthcare and transportation can also all benefit from these processes.

However, profiling and automated decision-making can pose significant risks for individuals’ rights and freedoms which require appropriate safeguards.

These processes can be opaque. Individuals might not know that they are being profiled or understand what is involved.

Profiling can perpetuate existing stereotypes and social segregation. It can also lock a person into a specific category and restrict them to their suggested preferences. This can undermine their freedom to choose, for example, certain products or services such as books, music or newsfeeds. It can lead to inaccurate predictions, denial of services and goods and unjustified discrimination in some cases.
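This lock-in effect can be made concrete with a minimal, purely hypothetical Python sketch (the genres, titles and functions below are invented for illustration, not drawn from any real system): a user is profiled into the single most frequent category in their history, and the system then recommends only from that category, so the profile reinforces itself over time.

```python
# Hypothetical sketch of a profiling feedback loop: the user is assigned
# to a category from past behaviour, and recommendations are then drawn
# only from that category, reinforcing the profile.

from collections import Counter

CATALOGUE = {
    "thriller": ["Gone Girl", "The Girl on the Train", "The Da Vinci Code"],
    "history": ["SPQR", "The Silk Roads"],
    "science": ["Cosmos", "A Brief History of Time"],
}

def profile(user_history):
    """Assign the user to the single most frequent genre in their history."""
    counts = Counter(genre for genre, _ in user_history)
    return counts.most_common(1)[0][0]

def recommend(user_history):
    """Recommend only unseen items from the profiled genre: the lock-in effect."""
    genre = profile(user_history)
    seen = {title for _, title in user_history}
    return [title for title in CATALOGUE[genre] if title not in seen]

history = [("thriller", "Gone Girl"), ("thriller", "The Girl on the Train"),
           ("science", "Cosmos")]
print(profile(history))    # thriller
print(recommend(history))  # ['The Da Vinci Code'] -- only more thrillers
```

Even this toy example shows the dynamic the guidelines warn about: the history or science titles the user might also enjoy are never surfaced again, because the profile, once formed, filters what the user is allowed to see.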

The GDPR introduces new provisions to address the risks arising from profiling and automated decision-making, notably, but not limited to, risks to privacy. The purpose of these guidelines is to clarify those provisions.

This document covers:

- Definitions of profiling and automated decision-making and the GDPR approach to these in general – Chapter II

- Specific provisions on automated decision-making as defined in Article 22 – Chapter III

- General provisions on profiling and automated decision-making – Chapter IV

- Children and profiling – Chapter V

- Data protection impact assessments – Chapter VI


The Annexes provide best practice recommendations, building on the experience gained in EU Member States.



You can reach the guidelines from the link below:

How Just Could a Robot War Be?


Peter M. Asaro
HUMlab & Department of Philosophy, Umeå University
Center for Cultural Analysis, Rutgers University





“While modern states may never cease to wage war against one another, they have recognized moral restrictions on how they conduct those wars. These “rules of war” serve several important functions in regulating the organization and behavior of military forces, and shape political debates, negotiations, and public perception. While the world has become somewhat accustomed to the increasing technological sophistication of warfare, it now stands at the verge of a new kind of escalating technology – autonomous robotic soldiers – and with them new pressures to revise the rules of war to accommodate them. This paper will consider the fundamental issues of justice involved in the application of autonomous and semiautonomous robots in warfare. It begins with a review of just war theory, as articulated by Michael Walzer [1], and considers how robots might fit into the general framework it provides. In so doing it considers how robots, “smart” bombs, and other autonomous technologies might challenge the principles of just war theory, and how international law might be designed to regulate them. I conclude that deep contradictions arise in the principles intended to govern warfare and our intuitions regarding the application of autonomous technologies to war fighting.”


You can reach the article from the link below:

Growing The Artificial Intelligence Industry In The UK


Department for Digital, Culture, Media & Sport and Department for Business, Energy & Industrial Strategy

15 October 2017


Executive Summary

“Increased use of Artificial Intelligence (AI) can bring major social and economic benefits to the UK. With AI, computers can analyse and learn from information at higher accuracy and speed than humans can. AI offers massive gains in efficiency and performance to most or all industry sectors, from drug discovery to logistics. AI is software that can be integrated into existing processes, improving them, scaling them, and reducing their costs, by making or suggesting more accurate decisions through better use of information.

It has been estimated that AI could add an additional USD $814 billion (£630bn) to the UK economy by 2035, increasing the annual growth rate of GVA from 2.5 to 3.9%.

Our vision is for the UK to become the best place in the world for businesses developing and deploying AI to start, grow and thrive, and to realise all the benefits the technology offers.

The pioneering British computer scientist Alan Turing is widely regarded as launching and inspiring much of the development of AI. While other countries and international companies are investing heavily in AI development, the UK is still regarded as a centre of expertise, for the present at least. This report recommends that more is done to build on Turing’s legacy to ensure the UK remains among the leaders in AI.

Key factors have combined to increase the capability of AI in recent years, in particular:

– New and larger volumes of data

– Supply of experts with the specific high level skills

– Availability of increasingly powerful computing capacity.

The barriers to achieving performance have fallen significantly, and continue to fall.

To continue developing and applying AI, the UK will need to increase ease of access to data in a wider range of sectors. This Review recommends:

– Development of data trusts, to improve trust and ease around sharing data

– Making more research data machine readable

– Supporting text and data mining as a standard and essential tool for research.

Skilled experts are needed to develop AI, and they are in short supply. To develop more AI, the UK will need a larger workforce with deep AI expertise, and more development of lower level skills to work with AI. This review recommends:

– An industry-funded Masters programme in AI

– Market research to develop conversion courses in AI that meet employers’ needs

– 200 more PhD places in AI at leading UK universities, attracting candidates from diverse backgrounds and from around the world.

– Credit-bearing online AI courses and continuing professional development leading to MScs

– Greater diversity in the AI workforce

– An international AI Fellowship Programme for the UK.

The UK has an exceptional record in key AI research. Growing the UK’s AI capability into the future will involve building on this with more research on AI in different application areas, and coordinating research capabilities. This Review recommends:

– The Alan Turing Institute should become the national institute for artificial intelligence and data science

– Universities should promote standardisation in transfer of IP

– Computing capacity for AI research should be coordinated and negotiated.

Increasing uptake of AI means increasing demand as well as supply through a better understanding of what AI can do and where it could be applied. This review recommends:

– An AI Council to promote growth and coordination in the sector

– Guidance on how to explain decisions and processes enabled by AI

– Support for export and inward investment

– Guidance on successfully applying AI to drive improvements in industry

– A programme to support public sector use of AI

– Funded challenges around data held by public organisations.

Our work has indicated that action in these areas could deliver a step-change improvement in growth of UK AI. This report makes the 18 recommendations listed in full below, which describe how Government, industry and academia should work together to keep the UK among the world leaders in AI.”


You can reach the full report from the link below:



The Future Computed: Artificial Intelligence and its role in society




Genre: Popular Science

Publisher: Microsoft Corporation

Publication Date: 2018




Developments in technology over the past 10 years have made us smarter and more productive, and have changed the way we communicate, learn, shop and even play. And that’s just a start. The development of artificial intelligence (AI) is creating systems that offer new opportunities to improve education and health care, combat poverty and provide a more sustainable future.

Microsoft believes that six principles should provide the foundation for the development and deployment of AI-powered solutions that put humans at the center: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability.

Microsoft is optimistic about the opportunities that AI provides to create a better future for all. To realise this future, it will be essential for governments, businesses, academics and civil society to work together in creating trustworthy AI systems.

I would recommend this book, which includes the technology pioneer’s predictions about the future of artificial intelligence, to anyone interested in this issue.

Selin Cetin

For Citation :

Selin Cetin
"The Future Computed: Artificial Intelligence and its role in society"
Hukuk & Robotik, Saturday, February 10th, 2018 (accessed 21/04/2021)

Big Data, Artificial Intelligence, Machine Learning and Data Protection


Information Commissioner’s Office 

4 Sept 2017



Big data is no fad. Since 2014 when my office’s first paper on this subject was published, the application of big data analytics has spread throughout the public and private sectors. Almost every day I read news articles about its capabilities and the effects it is having, and will have, on our lives. My home appliances are starting to talk to me, artificially intelligent computers are beating professional board-game players and machine learning algorithms are diagnosing diseases.

The fuel propelling all these advances is big data – vast and disparate datasets that are constantly and rapidly being added to. And what exactly makes up these datasets? Well, very often it is personal data. The online form you filled in for that car insurance quote. The statistics your fitness tracker generated from a run. The sensors you passed when walking into the local shopping centre. The social-media postings you made last week. The list goes on…

So it’s clear that the use of big data has implications for privacy, data protection and the associated rights of individuals – rights that will be strengthened when the General Data Protection Regulation (GDPR) is implemented. Under the GDPR, stricter rules will apply to the collection and use of personal data. In addition to being transparent, organisations will need to be more accountable for what they do with personal data. This is no different for big data, AI and machine learning.

However, implications are not barriers. It is not a case of big data ‘or’ data protection, or big data ‘versus’ data protection. That would be the wrong conversation. Privacy is not an end in itself, it is an enabling right. Embedding privacy and data protection into big data analytics enables not only societal benefits such as dignity, personality and community, but also organisational benefits like creativity, innovation and trust. In short, it enables big data to do all the good things it can do. Yet that’s not to say someone shouldn’t be there to hold big data to account.

In this world of big data, AI and machine learning, my office is more relevant than ever. I oversee legislation that demands fair, accurate and non-discriminatory use of personal data; legislation that also gives me the power to conduct audits, order corrective action and issue monetary penalties. Furthermore, under the GDPR my office will be working hard to improve standards in the use of personal data through the implementation of privacy seals and certification schemes. We’re uniquely placed to provide the right framework for the regulation of big data, AI and machine learning, and I strongly believe that our efficient, joined-up and co-regulatory approach is exactly what is needed to pull back the curtain in this space.

So the time is right to update our paper on big data, taking into account the advances made in the meantime and the imminent implementation of the GDPR. Although this is primarily a discussion paper, I do recognise the increasing utilisation of big data analytics across all sectors and I hope that the more practical elements of the paper will be of particular use to those thinking about, or already involved in, big data.

This paper gives a snapshot of the situation as we see it. However, big data, AI and machine learning is a fast-moving world and this is far from the end of our work in this space. We’ll continue to learn, engage, educate and influence – all the things you’d expect from a relevant and effective regulator.

Elizabeth Denham
Information Commissioner


You can reach the full report from the link below:

For Citation :

Selin Cetin
"Big Data, Artificial Intelligence, Machine Learning and Data Protection"
Hukuk & Robotik, Saturday, January 27th, 2018 (accessed 21/04/2021)


Artificial Intelligence and Robotics and Their Impact on the Workplace



IBA Global Employment Institute

April 2017



The IBA Global Employment Institute (GEI) was formed in early 2010 for the purpose of developing a global and strategic approach to the main legal issues regarding human resources for multinationals and worldwide institutions. In addition to regularly updating existing reports, the advisory board publishes new reports concerning current legal issues every year.

This year, the advisory board presents its first report on ‘Artificial Intelligence and Robotics and Their Impact on the Workplace’. The Working Group, coordinated by GEI Vice-Chair for Multinationals Gerlind Wisskirchen, focuses on future trends concerning the impact of intelligent systems on the labour market (Parts A and B) and some corresponding legal problems (Parts C to J).

Artificial intelligence (AI) will have a fundamental impact on the global labour market in the next few years. Therefore, the authors discuss legal, economic and business issues, such as changes in the future labour market and in company structures, impact on working time, remuneration and on the working environment, new forms of employment and the impact on labour relations.

Will intelligent algorithms and production robots lead to mass unemployment? By way of some examples, the authors show how AI will change the world of work fundamentally. In addition to companies, employees, lawyers and society, educational systems and legislators are also facing the task of meeting the new challenges that result from constantly advancing technology.

Please note that it is not the intention or purpose of the IBA Global Employment Institute’s report to describe the law on any particular topic; its aim is to illustrate certain changes and trends on the future labour market. References to a particular law are neither intended to be a description or summary of that law nor should they be relied upon as a statement of the law or treated as legal advice. Before taking any action, readers should obtain appropriate legal advice.


You can reach the full report from this link:

For Citation :

Selin Cetin
"Artificial Intelligence and Robotics and Their Impact on the Workplace"
Hukuk & Robotik, Monday, December 25th, 2017 (accessed 21/04/2021)