AI and control of Covid-19

 

The Ad hoc Committee on Artificial Intelligence (CAHAI) secretariat

2020

 

Artificial intelligence (AI) is being used as a tool to support the fight against the viral pandemic that has affected the entire world since the beginning of 2020. The press and the scientific community are echoing the high hopes that data science and AI can be used to confront the coronavirus (D. Yakobovitch, How to fight the Coronavirus with AI and Data Science, Medium, 15 February 2020) and "fill in the blanks" still left by science (G. Ratnam, Can AI Fill in the Blanks About Coronavirus? Experts Think So, Government Technology, 17 March 2020).

China, the first epicentre of this disease and renowned for its technological advances in this field, has tried to turn this to its advantage. Its uses seem to have included support for measures restricting the movement of populations, forecasting the evolution of disease outbreaks and research towards the development of a vaccine or treatment. With regard to the latter aspect, AI has been used to speed up genome sequencing, make faster diagnoses, carry out scanner analyses or, more occasionally, handle maintenance and delivery robots (A. Chun, In a time of coronavirus, China's investment in AI is paying off in a big way, South China Morning Post, 18 March 2020).

AI's contributions, which are also undeniable in terms of organising better access to scientific publications and supporting research, do not eliminate the need for clinical test phases, nor do they replace human expertise entirely. The structural issues encountered by health infrastructures in this crisis situation are not due to a lack of technological solutions but to the organisation of health services, which should be able to prevent such situations from occurring (Article 11 of the European Social Charter). Emergency measures using technological solutions, including AI, should also be assessed at the end of the crisis. Those that infringe on individual freedoms should not be trivialised on the pretext of better protection of the population. The provisions of Convention 108+ should in particular continue to be applied.

The contribution of artificial intelligence to the search for a cure

The first application of AI expected in the face of a health crisis is certainly the assistance to researchers to find a vaccine able to protect caregivers and contain the pandemic. Biomedicine and research rely on a large number of techniques, among which the various applications of computer science and statistics have already been making a contribution for a long time. The use of AI is therefore part of this continuity.

The predictions of the virus structure generated by AI have already saved scientists months of experimentation. AI seems to have provided significant support in this sense, even if it is limited due to so-called “continuous” rules and infinite combinatorics for the study of protein folding. The American start-up Moderna has distinguished itself by its mastery of a biotechnology based on messenger ribonucleic acid (mRNA) for which the study of protein folding is essential. It has managed to significantly reduce the time required to develop a prototype vaccine testable on humans thanks to the support of bioinformatics, of which AI is an integral part. 

Similarly, Chinese technology giant Baidu, in partnership with Oregon State University and the University of Rochester, published its Linearfold prediction algorithm in February 2020 to study the same protein folding. This algorithm is much faster than traditional algorithms in predicting the secondary structure of a virus's ribonucleic acid (RNA) and provides scientists with additional information on how viruses spread. The secondary structure of the RNA sequence of Covid-19 was reportedly computed by Linearfold in 27 seconds instead of 55 minutes (Baidu, How Baidu is bringing AI to the fight against coronavirus, MIT Technology Review, 11 March 2020). DeepMind, a subsidiary of Google's parent company Alphabet, has also shared the predictions of coronavirus protein structures made by its AlphaFold AI system (J. Jumper, K. Tunyasuvunakool, P. Kohli, D. Hassabis et al., Computational predictions of protein structures associated with COVID-19, DeepMind, 5 March 2020). IBM, Amazon, Google and Microsoft have also provided the computing power of their servers to the US authorities to process very large datasets in epidemiology, bioinformatics and molecular modelling (F. Lardinois, IBM, Amazon, Google and Microsoft partner with White House to provide compute resources for COVID-19 research, Techcrunch, 22 March 2020).
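To give a concrete sense of the underlying task, the classic textbook baseline for RNA secondary structure prediction is the Nussinov dynamic program, which maximises the number of complementary base pairs. The sketch below illustrates only this simplified O(n³) formulation; Linearfold itself is a different, linear-time beam-search algorithm, and nothing here is Baidu's code.

```python
# Classic Nussinov dynamic program: maximise the number of
# complementary base pairs in an RNA sequence, enforcing a minimum
# hairpin-loop length. Illustrative only -- Linearfold solves a
# related problem in linear time with beam search.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov_max_pairs(seq, min_loop=3):
    n = len(seq)
    # dp[i][j] = max number of base pairs in seq[i..j]
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]  # case: position j stays unpaired
            for k in range(i, j - min_loop):  # case: k pairs with j
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_max_pairs("GGGAAAUCC"))  # → 3 (G-C, G-C, G-U around an AAA loop)
```

The cubic cost of this formulation is exactly what makes linear-time approximations such as Linearfold valuable for genome-scale sequences.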

https://www.coe.int/en/web/artificial-intelligence/ai-and-control-of-covid-19-coronavirus

Artificial Intelligence, Data Security and GDPR

 


Among the prominent features of artificial intelligence and machine learning, now used in many sectors, is the ability to analyse data much faster than conventional programmatic tools or human beings, and to learn to manipulate data on their own.

In recent years, profiling and automated decision-making systems, which are frequently used in both the public and private sectors, have brought benefits to individuals and corporations in terms of increased productivity and resource savings, but they also carry risks. Decisions made by these systems affect individuals, and because of their complexity they can be difficult to justify. For example, artificial intelligence can lock a user into a specific category and restrict them to the suggested preferences, reducing their freedom to choose specific products and services, such as books, music or news articles. (Article 29 Data Protection Working Party, WP251, p. 5)

The GDPR, which will come into force in Europe in May, contains provisions on profiling and automated decision-making intended to prevent them from being used in ways that adversely affect the rights of individuals. Article 4 of the GDPR defines profiling as follows: "any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements." (WP251, p. 6) Profiling is used to make predictions about people, using data obtained from various sources about them. From this point of view, it can also be considered an evaluation or classification of individuals based on characteristics such as age, gender and weight.

Automated decision-making is the ability to make decisions with technological tools (such as artificial intelligence) without human intervention. It can be based on any type of data: data provided directly by the individuals concerned (such as responses to a questionnaire); data observed about the individuals (such as location data collected via an application); or derived or inferred data, such as a profile of the individual that has already been created (e.g. a credit score).

There are potentially three ways in which profiling may be used:

(i) general profiling;

(ii) decision-making based on profiling; and

(iii) solely automated decision-making, including profiling (Article 22).

The difference between (ii) and (iii) is best demonstrated by the following two examples where an individual applies for a loan online: a human decides whether to agree the loan based on a profile produced by purely automated means (ii); an algorithm decides whether the loan is agreed and the decision is automatically delivered to the individual, without any meaningful human input (iii). (WP251, p. 8)
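The distinction can also be expressed in code. The sketch below is a toy illustration with invented scoring rules (the field names and thresholds are hypothetical, not taken from any real credit model): the profile is produced by automated means in both cases, and what changes is whether a human makes the final decision.

```python
# Toy sketch of the WP251 distinction between decision-making based
# on profiling (ii) and solely automated decision-making (iii).
# All rules and thresholds are invented for illustration.

def build_profile(applicant):
    """Automated profiling: score the applicant from their data."""
    score = 0
    score += 2 if applicant["income"] > 30000 else 0
    score += 1 if applicant["years_employed"] >= 2 else 0
    score -= 2 if applicant["missed_payments"] > 0 else 0
    return score

def decide_fully_automated(applicant, threshold=2):
    """Case (iii): the algorithm alone decides -- Article 22 applies."""
    return build_profile(applicant) >= threshold

def decide_with_human(applicant, human_review, threshold=2):
    """Case (ii): the automated profile only informs a human decision-maker."""
    profile_score = build_profile(applicant)
    return human_review(profile_score, applicant)

applicant = {"income": 45000, "years_employed": 3, "missed_payments": 0}
print(decide_fully_automated(applicant))                  # → True
print(decide_with_human(applicant, lambda s, a: s >= 1))  # → True
```

In case (ii) the `human_review` callback stands in for meaningful human involvement; WP251 stresses that a human who merely rubber-stamps the algorithm's output does not move a system out of case (iii).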

The important questions to consider here are:

- How does the algorithm access this data?

- Is the source of the data correct?

- Does the algorithm's decision have legal effects on the person?

- Do individuals have any rights over a decision based on automated processing?

- What measures should data controllers take in this case?

Nowadays, most companies can analyse their customers' behaviour by collecting their data. For example, an insurance company can set insurance premiums through automated decision-making by tracking a driver's behaviour. In addition, profiling and automated decision-making systems, especially in advertising and marketing applications, can have significant effects on other individuals. Hypothetically, a credit card company could reduce a customer's card limit not on the basis of the customer's own payment history, but by analysing other customers who live in the same area and shop at the same stores. In other words, you can be deprived of an opportunity because of the actions of others.
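The credit-card scenario can be made concrete with a toy sketch (all data and the adjustment rule are invented for illustration): the customer's limit is reduced because of other people's defaults in the same postcode, not because of anything in their own record.

```python
# Toy illustration (invented data and rule) of group-based profiling:
# the limit is cut in proportion to the peer group's default rate,
# ignoring the customer's own clean payment history.

from statistics import mean

customers = [
    {"id": 1, "postcode": "X1", "defaulted": True},
    {"id": 2, "postcode": "X1", "defaulted": True},
    {"id": 3, "postcode": "X1", "defaulted": False},  # our customer, no defaults
    {"id": 4, "postcode": "Y2", "defaulted": False},
]

def adjusted_limit(customer, peers, base_limit=5000):
    """Cut up to half the base limit based on the peer-group default rate."""
    group = [p for p in peers
             if p["postcode"] == customer["postcode"] and p["id"] != customer["id"]]
    default_rate = mean(1.0 if p["defaulted"] else 0.0 for p in group)
    return round(base_limit * (1 - 0.5 * default_rate))

me = customers[2]
print(adjusted_limit(me, customers))  # → 2500, despite a clean record
```

This is precisely the kind of outcome that the transparency and contestation rights discussed below are meant to address: the individual never appears in the features that drove the decision.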

The data controller will be held responsible for mistakes

It is important to note that mistakes or biases in collected or shared data may lead to evaluations based on incorrect classifications and uncertain outcomes in the automated decision-making process, and may have negative effects on the individual. Decisions may be based on out-of-date data, or external data may be misinterpreted by the system. If the data used for automated decision-making are not correct, the resulting decision or profile will not be correct either.

In the face of possible mistakes that may arise in such systems using artificial intelligence and machine learning, certain obligations fall on "data controllers". The data controller should take adequate measures to ensure that the data used, whether collected directly or obtained indirectly, are accurate and up to date.
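What such "adequate measures" could look like in practice can be sketched as a simple validation gate run before a record enters automated decision-making. The field names, freshness threshold and plausibility ranges below are assumptions for illustration, not requirements taken from the GDPR.

```python
# Hypothetical validation gate a data controller might run before
# feeding a record into automated decision-making: stale or
# implausible records are flagged rather than silently used.

from datetime import date, timedelta

MAX_AGE = timedelta(days=365)  # assumed freshness requirement

def validate_record(record, today=None):
    """Return a list of problems; an empty list means the record may be used."""
    today = today or date.today()
    errors = []
    if today - record["last_updated"] > MAX_AGE:
        errors.append("stale: data older than one year")
    if not 0 <= record["age"] <= 120:
        errors.append("age out of plausible range")
    if record["income"] < 0:
        errors.append("negative income")
    return errors

record = {"last_updated": date(2018, 1, 10), "age": 34, "income": -100}
print(validate_record(record, today=date(2018, 6, 1)))  # → ['negative income']
```

Flagged records would then be routed to correction or human review instead of producing an automated decision about the individual.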

In addition, the data controller should review data retention periods, since retaining data for long periods may conflict with the principles of accuracy and keeping data up to date, as well as with proportionality. Another important issue is the processing and use of special categories of personal data in these systems. The GDPR requires explicit consent for the processing of special categories of personal data. However, the data controller should bear in mind that profiling can create special categories of personal data by combining data that are not in themselves special categories. For example, a person's health status may be inferred from food purchase records combined with data on food quality and energy content. (WP251, p. 22)
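The food-purchase example can be sketched as follows; the inference rules are invented purely to illustrate how ordinary, non-special-category records can combine into a health inference that the GDPR treats as special-category data.

```python
# Toy sketch of the WP251 point: combining ordinary purchase records
# can *create* special-category (health) data. The rules below are
# invented for illustration only.

def infer_health_signals(purchases):
    """Derive sensitive inferences from non-sensitive purchase records."""
    signals = set()
    labels = {p["label"] for p in purchases}
    if "gluten-free" in labels:
        signals.add("possible coeliac disease")
    if "insulin syringes" in labels:
        signals.add("possible diabetes")
    return signals

basket = [{"label": "gluten-free"}, {"label": "oat milk"}]
print(infer_health_signals(basket))  # → {'possible coeliac disease'}
```

None of the input records is special-category data on its own, yet the output is a health inference, which is why the controller cannot limit its special-category analysis to the data it collected directly.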

The GDPR also provides that people affected by automated decision-making have certain rights in this situation. Given the transparency principle underlying the GDPR, under Articles 13 and 14 the data controller should clearly explain to the individual how the profiling or automated decision-making process works.

Profiling may include a predictive component, which increases the risk of mistakes. Input data may be incorrect, irrelevant or taken out of context. Individuals may want to challenge the validity of the data and the grouping used. At this point, under Article 16, the data subject also has the right to rectification.

Similarly, the right to erasure in Article 17 may be claimed by the data subject in this context. If the profiling is based on consent and that consent is subsequently withdrawn, the data controller must delete the data subject's personal data, provided there is no other legal basis for the profiling.

The importance of children's personal data

Another point that needs attention regarding profiling and automated decision-making is the use of children's personal data. Children can be more vulnerable and more easily influenced, especially in online media. For example, in online games, profiling can be used to target players who are more likely to spend money in the game, as well as to serve them more personalised advertising. Article 22 of the GDPR does not distinguish between processing relating to children and to adults. Nevertheless, since children can easily be affected by such marketing efforts, the data controller must take appropriate measures for children and ensure that they are effective in protecting children's rights, freedoms and legitimate interests.

As a result, profiling and automated decision-making based on systems such as artificial intelligence and machine learning can have important consequences for the individual. Data collected in connection with this technology must be collected with the consent of the persons concerned or on another legal ground, and it is important that the data are subsequently used in line with the purpose for which they were collected. If the system starts to make unusual decisions, the data controller should take the necessary precautions to safeguard the rights and freedoms of the persons involved, including defining which procedures to follow.

See also:

Yapay Zekâ, Veri Güvenliği ve GDPR (in Turkish: "Artificial Intelligence, Data Security and GDPR")