What If We Could Fight Coronavirus with Artificial Intelligence?

 


European Parliamentary Research Service

2020

 

Analytics have changed the way disease outbreaks are tracked and managed, thereby saving lives. The international community is currently focused on the 2019-2020 novel coronavirus (COVID-19) outbreak, first identified in Wuhan, China. As it spreads, raising fears of a worldwide pandemic, international organisations and scientists are using artificial intelligence (AI) to track the epidemic in real time, to predict where the virus might appear next, and to develop an effective response.

On 31 December 2019, the World Health Organization (WHO) received the first report of a suspected novel coronavirus (COVID-19) in Wuhan. Amid concerns that the global response is fractured and uncoordinated, the WHO declared the outbreak a public health emergency of international concern (PHEIC) under the International Health Regulations (IHR) on 30 January 2020. Warnings about the novel coronavirus spreading beyond China were raised by AI systems more than a week before official information about the epidemic was released by international organisations. A health monitoring start-up correctly predicted the spread of COVID-19, using natural-language processing and machine learning. Decisions during such an outbreak need to be made on an urgent basis, often in the context of scientific uncertainty, fear, distrust, and social and institutional disruption. How can AI technologies be used to manage this type of global health emergency, without undermining protection of fundamental values and human rights?

Potential impacts and developments

In the case of COVID-19, AI has been used mostly to help detect whether people have the novel coronavirus through the detection of visual signs of COVID-19 on images from lung CT scans; to monitor, in real time, changes in body temperature through the use of wearable sensors; and to provide an open-source data platform to track the spread of the disease. AI could process vast amounts of unstructured text data to predict the number of potential new cases by area and which types of population will be most at risk, as well as evaluate and optimise strategies for controlling the spread of the epidemic. Other AI applications can deliver medical supplies by drone, disinfect patient rooms and scan databases of approved drugs (for other illnesses) that might also work against COVID-19. AI technologies have been harnessed to generate new molecules that could serve as potential medications, and even to accelerate the prediction of the virus's RNA secondary structure. A series of risk assessment algorithms for COVID-19 has been developed for use in healthcare settings, including an algorithm, developed by the European Centre for Disease Prevention and Control, setting out the main actions to follow when managing contacts of probable or confirmed COVID-19 cases.
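As a minimal illustration of the wearable-sensor idea mentioned above, the sketch below flags abnormal body-temperature readings against a simple rolling baseline. The threshold, window size and alert rule are invented for illustration; real monitoring systems use far richer models and clinically validated criteria.

```python
# Toy fever-flagging from wearable temperature readings (hypothetical data).
# The 37.5 °C threshold and 12-reading window are illustrative only.
from collections import deque

def fever_alerts(readings_c, window=12, threshold_c=37.5):
    """Yield (index, temp) when a reading exceeds both an absolute
    threshold and the wearer's recent rolling baseline by 1 °C."""
    baseline = deque(maxlen=window)
    for i, temp in enumerate(readings_c):
        mean = sum(baseline) / len(baseline) if baseline else None
        if mean is not None and temp >= threshold_c and temp >= mean + 1.0:
            yield i, temp
        baseline.append(temp)

readings = [36.5, 36.6, 36.4, 36.7, 38.1, 38.3]
print(list(fever_alerts(readings)))  # -> [(4, 38.1), (5, 38.3)]
```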

Certain AI applications can also detect fake news about the disease by applying machine-learning techniques for mining social media information, tracking down words that are sensational or alarming, and identifying which online sources are deemed authoritative for fighting what has been called an infodemic. Facebook, Google, Twitter and TikTok have partnered with the WHO to review and expose false information about COVID-19.
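As a toy illustration of the text-mining idea, the sketch below scores posts for sensational wording and flags them for human review. The word list and threshold are invented; the platforms mentioned above use trained classifiers and source-credibility signals rather than simple keyword counts.

```python
# Toy triage of sensational wording in outbreak-related posts.
import re

ALARM_WORDS = {"miracle", "cure", "secret", "hoax", "deadly", "guaranteed", "cover-up"}

def sensationalism_score(text: str) -> float:
    """Share of tokens that come from the alarm-word list."""
    tokens = re.findall(r"[a-z\-]+", text.lower())
    return sum(tok in ALARM_WORDS for tok in tokens) / max(len(tokens), 1)

def needs_review(text: str, threshold: float = 0.08) -> bool:
    # Flag for human fact-checking rather than automatic removal.
    return sensationalism_score(text) >= threshold

print(needs_review("Secret miracle cure guaranteed to stop the virus!"))      # True
print(needs_review("WHO publishes updated case definitions for COVID-19."))   # False
```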

In public health emergency response management, derogating from an individual's rights to privacy, non-discrimination and freedom of movement in the name of the urgency of the situation can take the form of restrictive measures, including domestic containment strategies without due process or medical examination without informed consent. In the case of COVID-19, AI applications such as facial recognition used to track people not wearing masks in public, AI-based fever detection systems, and the processing of data collected on digital platforms and mobile networks to track a person's recent movements have contributed to draconian enforcement of containment measures of unspecified duration. Chinese search giant Baidu has developed a system using infrared and facial recognition technology that scans and takes photographs of more than 200 people per minute at the Qinghe railway station in Beijing. In Moscow, authorities are using automated facial recognition technology to scan surveillance camera footage in an attempt to identify recent arrivals from China who have been placed under quarantine for fear of COVID-19 infection and are therefore not expected to appear in public. Finally, Chinese authorities are deploying drones to patrol public places, conduct thermal imaging and track people violating quarantine rules.

 

You can find the original document at the link below:

Click to access EPRS_ATA(2020)641538_EN.pdf

Personal Robots and Personal Data

 


Gizem Gültekin Várkonyi

Assistant Coordinator, PhD Fellow

University of Szeged

Faculty of Law and Political Sciences

 

 

Abstract

In this blog post, I discuss whether, and to what extent, it is possible to exercise the right to personal data protection in the era of Social Robots with Artificial Intelligence (hereafter, Social Robots). I analyze the concept of consent, which was strengthened in the European Union's General Data Protection Regulation (GDPR). I conclude that a Social Robot in personal use challenges the practicability of the GDPR. This conclusion derives, first, from a Social Robot's ability to collect vast amounts of data naturally, e.g. via natural Human-Robot Interaction or whenever it connects to the Internet. Since personal data are a personal Social Robot's life source, its blood, it would be absurd for a user not to give consent in exchange for more personalised services. In addition, it is well known that most users do not read or listen to consent texts, and often do not understand them even when they do. Moreover, it is not easy to answer the question of whether consent can be validly given for purposes that even the developer is unable to foresee ("Unpredictable by Design"). Finally, even if consent was validly given, it is not possible to make a Social Robot "forget" the personal data in question once that data has become an organic part of the robot's neural network. How consent could be withdrawn from a Social Robot should therefore also be questioned.
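The final point, that data absorbed into a trained model cannot simply be deleted, can be illustrated with a minimal sketch. The data, model and training loop below are toy inventions; real robot learning systems are far more complex, but the asymmetry is the same: deleting a record from storage does not undo its influence on already-trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "interaction log": 100 feature vectors with binary labels.
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = (X @ w_true + rng.normal(scale=0.1, size=100) > 0).astype(float)

def train_logistic(X, y, epochs=200, lr=0.1):
    """Plain gradient-descent logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

w_deployed = train_logistic(X, y)

# "Erase" one user's record from storage, then retrain from scratch.
X_del, y_del = np.delete(X, 0, axis=0), np.delete(y, 0)
w_retrained = train_logistic(X_del, y_del)

# The deployed weights still encode the deleted record; honouring erasure
# would require retraining or a machine-unlearning technique.
print("deployed: ", np.round(w_deployed, 3))
print("retrained:", np.round(w_retrained, 3))
```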

 

For more details about the author:

https://robotic.legal/author/ggvarkonyi/

Artificial intelligence: Anticipating Its Impact On Jobs To Ensure A Fair Transition

 


European Economic and Social Committee

Rapporteur: Franca SALIS-MADINIER

19/09/2018

 

1. Conclusions and recommendations

1.1. Artificial intelligence (AI) and robotics will expand and amplify the impact of the digitalisation of the economy on labour markets. Technological progress has always affected work and employment, requiring new forms of social and societal management. The EESC believes that technological development can contribute to economic and social progress; however, it feels that it would be a mistake to overlook its overall impact on society. In the world of work, AI will expand and amplify the scope of job automation. This is why the EESC would like to give its input to efforts to lay the groundwork for the social transformations which will go hand in hand with the rise of AI and robotics, by reinforcing and renewing the European social model.

1.2. The EESC flags up the potential of AI and its applications, particularly in the areas of healthcare, security in the transport and energy sectors, combating climate change and anticipating threats in the field of cybersecurity. The European Union, governments and civil society organisations have a key role to play when it comes to fully tapping the potential advantages of AI, particularly for people with disabilities or reduced mobility, the elderly and people with chronic health issues.

1.3. However, the EU has insufficient data on the digital economy and the resulting social transformation. The EESC recommends improving statistical tools and research, particularly on AI, the use of industrial and service robots, the Internet of Things and new economic models (the platform-based economy and new forms of employment and work).

1.4. The EESC calls on the European Commission to promote and support studies carried out by European sector-level social dialogue committees on the sector-specific impact of AI and robotics and, more broadly, of the digitalisation of the economy.

1.5. It is acknowledged that AI and robotics will displace and transform jobs by eliminating some and creating others. Whatever the outcome, the EU must guarantee access to social protection for all workers, employees and self-employed or bogus self-employed persons, in line with the European Pillar of Social Rights.

 

You can find the full document at the link below:

https://www.eesc.europa.eu/en/our-work/opinions-information-reports/opinions/artificial-intelligence-anticipating-its-impact-jobs-ensure-fair-transition-own-initiative-opinion

Artificial Intelligence

 


OPINION
 
European Economic and Social Committee
 
Artificial intelligence – The consequences of artificial intelligence on the (digital) single market, production, consumption, employment and society

31/05/2017

 

1. Artificial intelligence (AI) is currently undergoing a number of important developments and is rapidly being applied in society. The AI market amounts to around USD 664 million and is expected to grow to USD 38.8 billion by 2025. As AI can have both a positive and a negative impact on society, the EESC has undertaken to closely monitor developments surrounding AI, not only from a technical perspective but also specifically from an ethical, safety and societal perspective.
2. As the representative of European civil society, the EESC will shape, focus and promote the public debate on AI in the coming period, involving all relevant stakeholders: policy-makers, industry, the social partners, consumers, NGOs, educational and care institutions, and experts and academics from various disciplines (including AI, safety, ethics, economics, occupational science, law, behavioural science, psychology and philosophy).
3. Although important, the discussion on superintelligence is currently predominating and this is overshadowing the debate on the impact of the current applications of AI. Therefore, the task and objective of this process will, among other things, be to enhance and broaden knowledge of AI and thereby feed into an informed and balanced debate free of worst-case scenarios and extreme relativism. In this connection, the EESC will undertake to promote the development of AI for the benefit of humanity. Nevertheless, an important task and objective of this process is also to recognise, identify and monitor disruptive developments in and around the development of AI, in order to be able to address them adequately and in good time. This will lead to increased social involvement, trust and support with respect to the further sustainable development and use of AI. (…)
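For context, the market figures cited in point 1 imply a very steep growth trajectory. As a rough back-of-the-envelope calculation, and assuming the USD 664 million figure refers to 2016 (the base year is not stated in the excerpt), the implied compound annual growth rate over nine years is:

\[
\text{CAGR} = \left(\frac{38.8 \times 10^{9}}{664 \times 10^{6}}\right)^{1/9} - 1 \approx 58.4^{1/9} - 1 \approx 0.57,
\]

i.e. growth of roughly 57% per year through 2025.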

 

You can reach the original document at the link below:

https://www.eesc.europa.eu/en/our-work/opinions-information-reports/opinions/artificial-intelligence

Artificial Intelligence, Data Security and GDPR

 


Among the prominent features of artificial intelligence and machine learning, which are now used in many sectors, are the ability to analyze data much faster than conventional programmatic tools and human beings, and the ability to learn how to manipulate data on their own.

In recent years, profiling and automated decision-making systems, which are frequently used in both the public and private sectors, have brought benefits to individuals and corporations in terms of increased productivity and resource savings, but they have also created risks. Because of their complex nature, the decisions these systems make about individuals can be hard to explain or justify. For example, artificial intelligence can lock a user into a specific category and restrict them to the preferences suggested for that category. This reduces their freedom to choose specific products and services, such as books, music or news articles. (Article 29 Data Protection Working Party, WP251, p. 5)

The GDPR, which will come into force in Europe in May, contains provisions on profiling and automated decision-making intended to prevent them from being used in ways that adversely affect the rights of individuals. Article 4 of the GDPR defines profiling as follows: "Any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyze or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements." (WP251, p. 6) Profiling is used to make predictions about people, using data obtained from various sources about them. From this point of view, it can also be seen as an evaluation or classification of individuals based on characteristics such as age, gender and weight.

Automated decision-making is the ability to make decisions by technological means (such as artificial intelligence) without human involvement. It can be based on any type of data, for example: data provided directly by the individuals concerned (such as responses to a questionnaire); data observed about the individuals (such as location data collected via an application); or derived or inferred data, such as a profile of the individual that has already been created (e.g. a credit score).

There are potentially three ways in which profiling may be used:

(i) general profiling;

(ii) decision-making based on profiling; and

(iii) solely automated decision-making, including profiling (Article 22).

The difference between (ii) and (iii) is best demonstrated by the following two examples, in which an individual applies for a loan online: a human decides whether to grant the loan based on a profile produced by purely automated means (ii); an algorithm decides whether the loan is granted, and the decision is automatically delivered to the individual without any meaningful human input (iii). (WP251, p. 8)
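A minimal sketch can make the (ii)/(iii) distinction concrete. Everything below is hypothetical: the field names, the 650-score cut-off and the ten-times-income rule are invented, not a real credit model.

```python
from dataclasses import dataclass

@dataclass
class LoanApplication:
    applicant_id: str
    credit_score: int        # output of an earlier automated profiling step
    monthly_income: float
    requested_amount: float

def algorithmic_recommendation(app: LoanApplication) -> bool:
    """Purely automated assessment (toy rule for illustration)."""
    return app.credit_score >= 650 and app.requested_amount <= 10 * app.monthly_income

# (ii) Decision-making based on profiling: a human makes the final call,
# informed by the automated recommendation.
def human_in_the_loop_decision(app: LoanApplication, officer_approves: bool) -> bool:
    recommendation = algorithmic_recommendation(app)
    print(f"system recommends {recommendation}; officer decides {officer_approves}")
    return officer_approves

# (iii) Solely automated decision-making (GDPR Article 22): the algorithm's
# output goes straight to the applicant with no meaningful human input.
def solely_automated_decision(app: LoanApplication) -> bool:
    return algorithmic_recommendation(app)

app = LoanApplication("a-1", credit_score=700, monthly_income=3000.0,
                      requested_amount=20000.0)
print(solely_automated_decision(app))  # True under the toy rule
```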

The important questions that arise here are:

- How does the algorithm access this data?

- Is the source of the data accurate?

- Does the algorithm's decision produce legal effects for the person?

- Do individuals have any rights regarding a decision based on automated processing?

- What measures should data controllers take in this case?

Nowadays, most companies can analyze their customers' behaviour by collecting their data. For example, through automated decision-making, an insurance company can set premiums by tracking a driver's behaviour. In addition, profiling and automated decision-making systems, especially in advertising and marketing applications, can produce knock-on effects for other individuals. Hypothetically, a credit card company could reduce a customer's card limit not on the basis of the customer's own payment history, but by analyzing other customers who live in the same area and shop at the same stores. In other words, individuals can be deprived of an opportunity because of the actions of others.
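A minimal sketch of such usage-based pricing is shown below. All signals, weights and thresholds are invented for illustration; real actuarial models are far more elaborate and regulated.

```python
# Hypothetical usage-based insurance pricing (toy weights and thresholds).

def risk_score(hard_brakes_per_100km: float,
               night_driving_share: float,
               avg_speeding_kmh: float) -> float:
    """Combine telematics signals into a 0..1 risk score."""
    return (0.5 * min(hard_brakes_per_100km / 10.0, 1.0)
            + 0.3 * night_driving_share
            + 0.2 * min(avg_speeding_kmh / 20.0, 1.0))

def monthly_premium(base_premium: float, score: float) -> float:
    # Premium scales between 80% and 150% of the base, depending on risk.
    return base_premium * (0.8 + 0.7 * score)

cautious = risk_score(hard_brakes_per_100km=1, night_driving_share=0.05, avg_speeding_kmh=2)
risky = risk_score(hard_brakes_per_100km=8, night_driving_share=0.40, avg_speeding_kmh=15)
print(round(monthly_premium(50.0, cautious), 2), round(monthly_premium(50.0, risky), 2))
```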

Data controllers will be held responsible for mistakes

It is important to note that mistakes or biases in collected or shared data may lead to evaluations based on incorrect classifications, and to flawed outcomes in the automated decision-making process, with negative effects on the individual. Decisions may be based on out-of-date data, or externally sourced data may be misinterpreted by the system. If the data used for automated decision-making are not accurate, the resulting decision or profile will not be accurate either.

In the face of the possible mistakes that may arise in such systems using artificial intelligence and machine learning, certain obligations fall on "data controllers". The data controller should take adequate measures to ensure that the data used, whether collected directly or obtained indirectly, are accurate and up to date.

In addition, the data controller should consider data retention periods, since retaining data for long periods may be incompatible with accuracy and currency, as well as with proportionality. Another important issue is the processing and use of special categories of personal data in these systems. The GDPR requires explicit consent for the processing of special categories of personal data. However, the data controller should bear in mind that profiling can create special categories of personal data by inference from data that are not themselves special categories. For example, a person's health status may be inferred from food purchase records combined with data on food quality and energy content. (WP251, p. 22)
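A toy sketch of that inference risk: ordinary purchase records, aggregated, start to look like health data. The product categories and thresholds below are invented for illustration.

```python
# Toy inference of special-category (health) data from purchase records.
from collections import Counter

GLUTEN_FREE = {"gf bread", "gf pasta", "rice cakes"}
DIABETIC_FRIENDLY = {"sugar-free cola", "sweetener", "low-gi muesli"}

def infer_health_signals(purchases):
    counts = Counter(p.lower() for p in purchases)
    total = max(sum(counts.values()), 1)
    gf_share = sum(counts[i] for i in GLUTEN_FREE) / total
    diabetic_share = sum(counts[i] for i in DIABETIC_FRIENDLY) / total
    # Ordinary shopping data becomes health data once aggregated like this.
    return {"possible_coeliac": gf_share > 0.3,
            "possible_diabetic": diabetic_share > 0.3}

print(infer_health_signals(["GF bread", "GF pasta", "rice cakes", "milk"]))
```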

The GDPR also provides that people who are affected by automated decision-making based on their data have certain rights in this situation. Given the transparency principle underlying the GDPR, Articles 13 and 14 require the data controller to clearly explain to the individual how the profiling or automated decision-making process works.

Profiling often includes a prediction element, which increases the risk of mistakes. Input data may be inaccurate or irrelevant, or taken out of context. Individuals may want to challenge the accuracy of the data and of the grouping applied to them. Under Article 16, the data subject therefore also has the right to rectification.

Similarly, the data subject may invoke the right to erasure under Article 17 in this context. If the profiling is based on consent and that consent is subsequently withdrawn, the data controller must erase the data subject's personal data, provided there is no other legal basis for the profiling.
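A minimal sketch of a consent-withdrawal handler under Article 17 follows. The storage layout and legal-basis flag are hypothetical; a real controller would also have to propagate erasure to processors and backups.

```python
# Hypothetical profile store keyed by user ID.
profiles = {
    "user-42": {"consent": True, "other_legal_basis": None,
                "profile": {"segment": "A"}},
}

def withdraw_consent(user_id: str) -> str:
    record = profiles.get(user_id)
    if record is None:
        return "no data held"
    record["consent"] = False
    # Erase only if consent was the sole legal basis for the profiling.
    if record["other_legal_basis"] is None:
        del profiles[user_id]
        return "profile erased"
    return f"retained under: {record['other_legal_basis']}"

print(withdraw_consent("user-42"))  # -> "profile erased"
```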

The importance of children's personal data

Another point that needs attention in relation to profiling and automated decision-making is the use of children's personal data. Children can be more susceptible and more easily influenced, especially in online media. For example, in online games, profiling can be used to target players who are considered more likely to spend money in the game, and to serve them more personalised advertising. Article 22 of the GDPR does not distinguish between processing that concerns children and processing that concerns adults. Nevertheless, because children can be easily affected by such marketing efforts, the data controller must make sure to take appropriate measures for children and ensure they are effective in protecting children's rights, freedoms and legitimate interests.

As a result, profiling and automated decision-making based on systems such as artificial intelligence and machine learning can have significant consequences for the individual. Data collected in connection with these technologies must be collected with the consent of the persons concerned or on another legal ground, and it is important that the data are subsequently used in line with the purpose for which they were collected. If the system suddenly starts to make unusual decisions, the data controller should take the necessary precautions, including defining what procedures to follow, and safeguard the rights and freedoms of the persons involved.

See also:

Yapay Zekâ, Veri Güvenliği ve GDPR