AI Governance in the Public Sector:

Three tales from the frontiers of automated decision-making in democratic settings

 

 

Maciej Kuziemski, Berkman Klein Center for Internet and Society, Harvard University

Gianluca Misuraca, European Commission, Joint Research Centre, Digital Economy Unit

April 2020

 

 

 

Abstract

The rush to understand the new socio-economic contexts created by the wide adoption of AI is justified by its far-ranging consequences, spanning almost every walk of life. Yet the public sector’s predicament is a tragic double bind: its obligation to protect citizens from potential algorithmic harms is at odds with the temptation to increase its own efficiency – in other words, to govern algorithms while governing by algorithms. Whether such a dual role is even possible has been a matter of debate. The challenge stems from algorithms’ intrinsic properties: they are distinct from the digital solutions long embraced by governments, and they create externalities that rule-based programming lacks. As pressure to deploy automated decision-making systems in the public sector grows, this paper examines how the use of AI in the public sector, in relation to existing data governance regimes and national regulatory practices, can intensify existing power asymmetries. To this end, it investigates the legal and policy instruments associated with the use of AI for strengthening the immigration process control system in Canada, “optimising” the employment services in Poland, and personalising the digital service experience in Finland, and it advocates a common framework for evaluating the potential impact of the use of AI in the public sector. In this regard, the paper discusses the specific effects of automated decision support systems on public services and the growing expectation that governments play a more prevalent role in the digital society, ensuring that the potential of technology is harnessed while negative effects are controlled and, where possible, avoided. This is of particular importance in light of the current COVID-19 emergency, in which AI and the regulatory frameworks underpinning data ecosystems have become crucial policy issues: as more and more innovations are built on large-scale data collection from digital devices and the real-time accessibility of information and services, the contacts and relationships between institutions and citizens could strengthen – or undermine – trust in governance systems and democracy.

 

You can find the original paper at the link below:

https://ec.europa.eu/jrc/en/publication/ai-governance-public-sector-three-tales-frontiers-automated-decision-making-democratic-settings

What If We Could Fight Coronavirus with Artificial Intelligence?

 

European Parliamentary Research Service

2020

 

Analytics have changed the way disease outbreaks are tracked and managed, thereby saving lives. The international community is currently focused on the 2019-2020 novel coronavirus (COVID-19) outbreak, first identified in Wuhan, China. As it spreads, raising fears of a worldwide pandemic, international organisations and scientists are using artificial intelligence (AI) to track the epidemic in real time, predict where the virus might appear next, and develop an effective response.

On 31 December 2019, the World Health Organization (WHO) received the first report of a suspected novel coronavirus (COVID-19) in Wuhan. Amid concerns that the global response is fractured and uncoordinated, the WHO declared the outbreak a public health emergency of international concern (PHEIC) under the International Health Regulations (IHR) on 30 January 2020. Warnings about the novel coronavirus spreading beyond China were raised by AI systems more than a week before official information about the epidemic was released by international organisations. A health monitoring start-up correctly predicted the spread of COVID-19, using natural-language processing and machine learning. Decisions during such an outbreak need to be made on an urgent basis, often in the context of scientific uncertainty, fear, distrust, and social and institutional disruption. How can AI technologies be used to manage this type of global health emergency, without undermining protection of fundamental values and human rights?
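
The briefing does not detail how such early-warning systems work. As a rough, hypothetical sketch of the underlying idea (scanning streams of news text for unusual spikes in outbreak-related vocabulary), consider the following; the term list, threshold, and sample snippets are all invented for illustration:

```python
from collections import Counter

# Hypothetical illustration: flag locations where outbreak-related terms
# spike above their historical baseline in a stream of news snippets.
OUTBREAK_TERMS = {"pneumonia", "fever", "outbreak", "quarantine"}

def term_counts(snippets):
    """Count outbreak-related terms per location in (location, text) pairs."""
    counts = Counter()
    for location, text in snippets:
        tokens = text.lower().split()
        counts[location] += sum(tokens.count(t) for t in OUTBREAK_TERMS)
    return counts

def flag_anomalies(today, baseline, ratio=3.0):
    """Flag locations whose term count exceeds `ratio` times the baseline."""
    return [loc for loc, n in today.items()
            if n > ratio * max(baseline.get(loc, 0), 1)]

# Toy usage with fabricated snippets (illustrative only).
baseline = term_counts([("wuhan", "local news on traffic"),
                        ("paris", "flu season fever advice fever")])
today = term_counts([("wuhan", "pneumonia outbreak fever quarantine pneumonia"),
                     ("paris", "weather report")])
print(flag_anomalies(today, baseline))  # ['wuhan']
```

Production systems replace the keyword list with trained language models and draw on many more sources (airline data, official bulletins), but the anomaly-over-baseline logic is the same basic shape.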

Potential impacts and developments

In the case of COVID-19, AI has been used mostly to help detect whether people have the novel coronavirus through the detection of visual signs of COVID-19 on images from lung CT scans; to monitor, in real time, changes in body temperature through the use of wearable sensors; and to provide an open-source data platform to track the spread of the disease. AI could process vast amounts of unstructured text data to predict the number of potential new cases by area and which types of populations will be most at risk, as well as evaluate and optimise strategies for controlling the spread of the epidemic. Other AI applications can deliver medical supplies by drone, disinfect patient rooms and scan approved drug databases (for other illnesses) for compounds that might also work against COVID-19. AI technologies have been harnessed to propose new molecules that could serve as potential medications, and even to accelerate the prediction of the virus’s RNA secondary structure. A series of risk assessment algorithms for COVID-19 for use in healthcare settings has been developed, including an algorithm, developed by the European Centre for Disease Prevention and Control, for the main actions to be followed when managing contacts of probable or confirmed COVID-19 cases.
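
The briefing does not specify the models behind these applications. For the CT-scan use case, published systems are typically convolutional image classifiers; below is a minimal, hypothetical PyTorch sketch of that general shape (untrained, with a random tensor standing in for a real scan), not any deployed diagnostic tool:

```python
import torch
from torch import nn

# Illustrative sketch only: a small CNN that maps a single-channel lung CT
# slice to a probability of COVID-19-typical findings. Real systems use far
# deeper networks trained on curated, clinically validated datasets.
class CTClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 128x128 -> 64x64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 64x64 -> 32x32
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 1),             # logit for "COVID-like"
        )

    def forward(self, x):
        return self.head(self.features(x))

model = CTClassifier()
scan = torch.randn(1, 1, 128, 128)                  # fake CT slice
prob = torch.sigmoid(model(scan))                   # probability in [0, 1]
print(f"p(COVID-like findings) = {prob.item():.2f}")
```

Any real diagnostic system would additionally require clinical validation and regulatory approval, a point the surrounding reports stress.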

Certain AI applications can also detect fake news about the disease by applying machine-learning techniques to mine social media information, track down words that are sensational or alarming, and identify which online sources are deemed authoritative, helping to fight what has been called an “infodemic”. Facebook, Google, Twitter and TikTok have partnered with the WHO to review and expose false information about COVID-19.
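
As a toy illustration of the kind of machine-learning pipeline described here (and not any platform’s actual system), a bag-of-words classifier can be trained to flag sensational wording; the examples and labels below are fabricated:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical illustration: a tiny supervised classifier separating
# sensationalist claims from sober reporting. These fabricated examples
# stand in for the large annotated corpora real systems require.
texts = [
    "MIRACLE cure wipes out virus overnight, doctors stunned!",
    "SHOCKING: garlic water destroys coronavirus in hours",
    "WHO publishes updated case counts for the affected regions",
    "Health ministry issues revised quarantine guidance",
]
labels = [1, 1, 0, 0]  # 1 = likely misinformation, 0 = likely reliable

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["Secret cure the government is hiding, doctors stunned"]))
```

In practice, source credibility signals and human fact-checkers carry much of the load; the lexical classifier is only a first filter.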

In public health emergency response management, derogating from individuals’ rights to privacy, non-discrimination and freedom of movement in the name of the urgency of the situation can sometimes take the form of restrictive measures, including domestic containment strategies without due process or medical examination without informed consent. In the case of COVID-19, AI applications such as the use of facial recognition to track people not wearing masks in public, AI-based fever detection systems, and the processing of data collected on digital platforms and mobile networks to track a person’s recent movements have contributed to the draconian enforcement of containment measures, often for unspecified durations. Chinese search giant Baidu has developed a system using infrared and facial recognition technology that scans and takes photographs of more than 200 people per minute at the Qinghe railway station in Beijing. In Moscow, authorities are using automated facial recognition technology to scan surveillance camera footage in an attempt to identify recent arrivals from China who have been placed under quarantine for fear of COVID-19 infection. Finally, Chinese authorities are deploying drones to patrol public places, conduct thermal imaging, and track people violating quarantine rules.

 

You can find the original document, EPRS_ATA(2020)641538_EN.pdf, on the European Parliament’s website.

AI and control of Covid-19

 

The Secretariat of the Ad hoc Committee on Artificial Intelligence (CAHAI), Council of Europe

2020

 

Artificial intelligence (AI) is being used as a tool to support the fight against the viral pandemic that has affected the entire world since the beginning of 2020. The press and the scientific community are echoing the high hopes that data science and AI can be used to confront the coronavirus (D. Yakobovitch, How to fight the Coronavirus with AI and Data Science, Medium, 15 February 2020) and “fill in the blanks” still left by science (G. Ratnam, Can AI Fill in the Blanks About Coronavirus? Experts Think So, Government Technology, 17 March 2020).

China, the first epicentre of this disease and renowned for its technological advances in this field, has sought to turn them to its advantage. Its uses of AI seem to have included supporting measures restricting the movement of populations, forecasting the evolution of disease outbreaks, and conducting research towards a vaccine or treatment. With regard to the latter, AI has been used to speed up genome sequencing, make faster diagnoses, carry out scanner analyses or, more occasionally, handle maintenance and delivery robots (A. Chun, In a time of coronavirus, China’s investment in AI is paying off in a big way, South China Morning Post, 18 March 2020).

AI’s contributions, while undeniable in organising better access to scientific publications and supporting research, do not eliminate the need for clinical test phases, nor do they replace human expertise entirely. The structural issues encountered by health infrastructures in this crisis stem not from technological solutions but from the organisation of health services, which should be able to prevent such situations from occurring (Article 11 of the European Social Charter). Emergency measures relying on technological solutions, including AI, should also be assessed at the end of the crisis. Those that infringe on individual freedoms should not be trivialised on the pretext of better protecting the population; the provisions of Convention 108+ should, in particular, continue to be applied.

The contribution of artificial intelligence to the search for a cure

The first application of AI expected in the face of a health crisis is certainly assistance to researchers seeking a vaccine able to protect caregivers and contain the pandemic. Biomedicine and research rely on a large number of techniques, among which the various applications of computer science and statistics have long played a part; the use of AI is therefore part of this continuity.

The predictions of the virus’s structure generated by AI have already saved scientists months of experimentation. AI seems to have provided significant support in this sense, even if its contribution remains limited by the so-called “continuous” rules and the infinite combinatorics involved in the study of protein folding. The American start-up Moderna has distinguished itself by its mastery of a biotechnology based on messenger ribonucleic acid (mRNA), for which the study of protein folding is essential. It has managed to significantly reduce the time required to develop a prototype vaccine testable on humans thanks to the support of bioinformatics, of which AI is an integral part.

Similarly, Chinese technology giant Baidu, in partnership with Oregon State University and the University of Rochester, published its LinearFold prediction algorithm in February 2020 to address the related problem of RNA folding. This algorithm is much faster than traditional algorithms at predicting a virus’s RNA secondary structure, and it provides scientists with additional information on how viruses spread. The secondary structure of the Covid-19 RNA sequence was reportedly computed by LinearFold in 27 seconds instead of 55 minutes (Baidu, How Baidu is bringing AI to the fight against coronavirus, MIT Technology Review, 11 March 2020). DeepMind, a subsidiary of Google’s parent company Alphabet, has also shared predictions of coronavirus protein structures produced by its AlphaFold AI system (J. Jumper, K. Tunyasuvunakool, P. Kohli, D. Hassabis et al., Computational predictions of protein structures associated with COVID-19, DeepMind, 5 March 2020). IBM, Amazon, Google and Microsoft have also provided the computing power of their servers to the US authorities to process very large datasets in epidemiology, bioinformatics and molecular modelling (F. Lardinois, IBM, Amazon, Google and Microsoft partner with White House to provide compute resources for COVID-19 research, TechCrunch, 22 March 2020).
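
LinearFold itself is an approximate linear-time algorithm and is not reproduced here. To illustrate why classical RNA secondary-structure prediction scales poorly, the sketch below implements the textbook Nussinov algorithm, which maximises complementary base pairs with cubic-time dynamic programming; the sequence is a toy example:

```python
# Toy illustration of classical O(n^3) RNA secondary-structure prediction:
# the Nussinov algorithm maximises the number of complementary base pairs.
# LinearFold's contribution is doing comparable work in linear time, which
# is what makes whole-genome predictions feasible in seconds.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov_max_pairs(seq, min_loop=3):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]       # dp[i][j] = max pairs in seq[i..j]
    for span in range(min_loop + 1, n):    # enforce a minimal hairpin loop
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]            # case 1: base i stays unpaired
            for k in range(i + min_loop + 1, j + 1):
                if (seq[i], seq[k]) in PAIRS:   # case 2: pair i with k
                    left = dp[i + 1][k - 1]
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_max_pairs("GGGAAAUCC"))  # small fabricated sequence -> 3
```

Real tools replace the simple pair count with thermodynamic energy models, but this cubic loop structure is exactly the bottleneck that a linear-time method removes for genome-length sequences.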

https://www.coe.int/en/web/artificial-intelligence/ai-and-control-of-covid-19-coronavirus

Artificial Intelligence: A European Perspective

European Commission, 2018

 

Abstract

We are only at the beginning of a rapid period of transformation of our economy and society due to the convergence of many digital technologies. Artificial Intelligence (AI) is central to this change and offers major opportunities to improve our lives. The recent developments in AI are the result of increased processing power, improvements in algorithms and the exponential growth in the volume and variety of digital data. Many applications of AI have started entering our everyday lives, from machine translation to image recognition and music generation, and are increasingly deployed in industry, government, and commerce. Connected and autonomous vehicles and AI-supported medical diagnostics are areas of application that will soon be commonplace. There is strong global competition on AI among the US, China, and Europe. The US leads for now, but China is catching up fast and aims to lead by 2030. For the EU, it is not so much a question of winning or losing a race but of finding a way of embracing the opportunities offered by AI in a manner that is human-centred, ethical, secure, and true to our core values. The EU Member States and the European Commission are developing coordinated national and European strategies, recognising that only together can we succeed. We can build on our areas of strength, including excellent research, leadership in some industrial sectors like automotive and robotics, a solid legal and regulatory framework, and very rich cultural diversity, also at regional and sub-regional levels.

It is generally recognised that AI can flourish only if supported by a robust computing infrastructure and good quality data: 

  • With respect to computing, we identified a window of opportunity for Europe to invest in the emerging paradigm of computing distributed towards the edges of the network, in addition to centralised facilities. This will also support the future deployment of 5G and the Internet of Things. 
  • With respect to data, we argue in favour of learning from successful Internet companies, opening access to data and developing interactivity with users rather than just broadcasting data. In this way, we can develop ecosystems of public administrations, firms, and civil society that enrich the data to make it fit for AI applications responding to European needs. 

We should embrace the opportunities afforded by AI, but not uncritically. The black-box characteristics of most leading AI techniques make them opaque even to specialists. AI systems are currently limited to narrow and well-defined tasks, and their technologies inherit imperfections from their human creators, such as the well-recognised bias effect present in data. We should challenge the shortcomings of AI and work towards strong evaluation strategies, transparent and reliable systems, and good human-AI interactions. Ethical and secure-by-design algorithms are crucial to build trust in this disruptive technology, but we also need a broader engagement of civil society on the values to be embedded in AI and the directions for future development. This social engagement should be part of the effort to strengthen our resilience at all levels, from local to national and European, across institutions, industry and civil society. Developing local ecosystems of skills, computing, data, and applications can foster the engagement of local communities, respond to their needs, harness local creativity and knowledge, and build a human-centred, diverse, and socially driven AI. We still know very little about how AI will impact the way we think, make decisions and relate to each other, and how it will affect our jobs. This uncertainty can be a source of concern, but it is also a sign of opportunity. The future is not yet written. We can shape it based on our collective vision of what future we would like to have. But we need to act together and act fast.

 

You can find the report at the link below:

https://ec.europa.eu/jrc/en/publication/artificial-intelligence-european-perspective

AI Now Report 2018

 

December 2018

 

RECOMMENDATIONS

 

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain. The implementation of AI systems is expanding rapidly, without adequate governance, oversight, or accountability regimes. Domains like health, education, criminal justice, and welfare all have their own histories, regulatory frameworks, and hazards. However, a national AI safety body or general AI standards and certification model will struggle to meet the sectoral expertise requirements needed for nuanced regulation. We need a sector-specific approach that does not prioritize the technology, but focuses on its application within a given domain. Useful examples of sector-specific approaches include the United States Federal Aviation Administration and the National Highway Traffic Safety Administration.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest. Such regulation should include national laws that require strong oversight, clear limitations, and public transparency. Communities should have the right to reject the application of these technologies in both public and private contexts. Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance. Affect recognition deserves particular attention. Affect recognition is a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and “worker engagement” based on images or video of faces. These claims are not backed by robust scientific evidence, and are being applied in unethical and irresponsible ways that often recall the pseudosciences of phrenology and physiognomy. Linking affect recognition to hiring, access to insurance, education, and policing creates deeply concerning risks, at both an individual and societal level.
  3. The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems. Government regulation is an important component, but leading companies in the AI industry also need internal accountability structures that go beyond ethics guidelines. This should include rank-and-file employee representation on the board of directors, external ethics advisory boards, and the implementation of independent monitoring and transparency efforts. Third party experts should be able to audit and publish about key systems, and companies need to ensure that their AI infrastructures can be understood from “nose to tail,” including their ultimate application and use.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector. Vendors and developers who create AI and automated decision systems for use in government should agree to waive any trade secrecy or other legal claim that inhibits full auditing and understanding of their software. Corporate secrecy laws are a barrier to due process: they contribute to the “black box effect” rendering systems opaque and unaccountable, making it hard to assess bias, contest decisions, or remedy errors. Anyone procuring these technologies for use in the public sector should demand that vendors waive these claims before entering into any agreements.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers. Organizing and resistance by technology workers has emerged as a force for accountability and ethical decision making. Technology companies need to protect workers’ ability to organize, whistleblow, and make ethical choices about what projects they work on. This should include clear policies accommodating and protecting conscientious objectors, ensuring workers the right to know what they are working on, and the ability to abstain from such work without retaliation or retribution. Workers raising ethical concerns must also be protected, as should whistleblowing in the public interest.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services. The hype around AI is only growing, leading to widening gaps between marketing promises and actual product performance. With these gaps come increasing risks to both individuals and commercial customers, often with grave consequences. Much like other products and services that have the potential to seriously impact or exploit populations, AI vendors should be held to high standards for what they can promise, especially when the scientific evidence to back these promises is inadequate and the longer-term consequences are unknown.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces. Technology companies and the AI field as a whole have focused on the “pipeline model,” looking to train and hire more diverse employees. While this is important, it overlooks what happens once people are hired into workplaces that exclude, harass, or systemically undervalue people on the basis of gender, race, sexuality, or disability. Companies need to examine the deeper issues in their workplaces, and the relationship between exclusionary cultures and the products they build, which can produce tools that perpetuate bias and discrimination. This change in focus needs to be accompanied by practical action, including a commitment to end pay and opportunity inequity, along with transparency measures about hiring and retention.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.” For meaningful accountability, we need to better understand and track the component parts of an AI system and the full supply chain on which it relies: that means accounting for the origins and use of training data, test data, models, application programming interfaces (APIs), and other infrastructural components over a product life cycle. We call this accounting for the “full stack supply chain” of AI systems, and it is a necessary condition for a more responsible form of auditing. The full stack supply chain also includes understanding the true environmental and labor costs of AI systems. This incorporates energy use, the use of labor in the developing world for content moderation and training data creation, and the reliance on clickworkers to develop and maintain AI systems. A minimal data-structure sketch of such a supply chain record follows this list.
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues. The people most at risk of harm from AI systems are often those least able to contest the outcomes. We need increased support for robust mechanisms of legal redress and civic participation. This includes supporting public advocates who represent those cut off from social services due to algorithmic decision making, civil society organizations and labor organizers that support groups that are at risk of job loss and exploitation, and community-based infrastructures that enable public participation.
  10. University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.
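
As a purely illustrative reading of recommendation 8, “full stack supply chain” accounting could be captured in structured provenance records like the hypothetical sketch below; the field names and the example are invented, not a proposed standard:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a provenance record for the "full stack supply
# chain": one structured entry per component of an AI system, tracked over
# the product life cycle. Field names are illustrative, not a schema from
# the AI Now report.
@dataclass
class ComponentRecord:
    name: str                 # e.g. "training data", "model", "API"
    origin: str               # who produced it, and where
    labor_notes: str          # clickwork or content-moderation labor involved
    energy_kwh: float         # estimated energy cost to produce or run it
    audits: list[str] = field(default_factory=list)  # third-party audit reports

@dataclass
class SupplyChainManifest:
    system: str
    components: list[ComponentRecord] = field(default_factory=list)

    def unaudited(self):
        """Components lacking any third-party audit (accountability gaps)."""
        return [c.name for c in self.components if not c.audits]

manifest = SupplyChainManifest("benefits-eligibility-scorer")
manifest.components.append(ComponentRecord(
    name="training data", origin="agency case files, 2012-2017",
    labor_notes="labelled by contracted annotators", energy_kwh=0.0))
print(manifest.unaudited())  # ['training data']
```

The point of such a record is not the code but the discipline: every component that shapes a system’s behaviour gets an owner, a history, and an auditable trail.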

 

You can find the full report at the link below:

https://ainowinstitute.org/AI_Now_2018_Report.pdf