New Rules For Artificial Intelligence
Questions and Answers
- Why do we need to regulate the use of Artificial Intelligence technology?
The potential benefits of AI for our societies are manifold, from improved medical care to better education. Faced with the rapid technological development of AI, the EU must act as one to harness these opportunities. While most AI systems will pose low to no risk, certain AI systems create risks that need to be addressed to avoid undesirable outcomes. For example, the opacity of many algorithms may create uncertainty and hamper the effective enforcement of existing legislation on safety and fundamental rights. Responding to these challenges, legislative action is needed to ensure a well-functioning internal market for AI systems in which both benefits and risks are adequately addressed. This includes applications such as biometric identification systems or AI decisions touching on important personal interests, such as in the areas of recruitment, education, healthcare or law enforcement. The Commission’s proposal for a regulatory framework on AI aims to ensure the protection of fundamental rights and user safety, as well as trust in the development and uptake of AI.
- Which risks will the new AI rules address?
The uptake of AI systems has a strong potential to deliver societal benefits, drive economic growth and enhance EU innovation and global competitiveness. However, the specific characteristics of certain AI systems may create new risks related to user safety and fundamental rights. This leads to legal uncertainty for companies and, due to a lack of trust, potentially slower uptake of AI technologies by businesses and citizens. Disparate regulatory responses by national authorities would risk fragmenting the internal market.
- What are the risk categories?
The Commission proposes a risk-based approach, with four levels of risk:
Unacceptable risk: A very limited set of particularly harmful uses of AI that contravene EU values because they violate fundamental rights (e.g. social scoring by governments, exploitation of vulnerabilities of children, use of subliminal techniques, and – subject to narrow exceptions – live remote biometric identification systems in publicly accessible spaces used for law enforcement purposes) will be banned.
High-risk: A limited number of AI systems defined in the proposal, creating an adverse impact on people’s safety or their fundamental rights (as protected by the EU Charter of Fundamental Rights), are considered high-risk. The list of high-risk AI systems is annexed to the proposal and can be reviewed to align with the evolution of AI use cases (future-proofing). High-risk systems also include safety components of products covered by sectorial Union legislation; they will always be high-risk when subject to third-party conformity assessment under that sectorial legislation.

In order to ensure trust and a consistent and high level of protection of safety and fundamental rights, mandatory requirements for all high-risk AI systems are proposed. Those requirements cover the quality of data sets used; technical documentation and record keeping; transparency and the provision of information to users; human oversight; and robustness, accuracy and cybersecurity. In the event of a breach, the requirements will give national authorities access to the information needed to investigate whether the use of the AI system complied with the law. The proposed framework is consistent with the Charter of Fundamental Rights of the European Union and in line with the EU’s international trade commitments.
Limited risk: For certain AI systems specific transparency requirements are imposed, for example where there is a clear risk of manipulation (e.g. via the use of chatbots). Users should be aware that they are interacting with a machine.
Minimal risk: All other AI systems can be developed and used subject to the existing legislation without additional legal obligations. The vast majority of AI systems currently used in the EU fall into this category. Voluntarily, providers of those systems may choose to apply the requirements for trustworthy AI and adhere to voluntary codes of conduct.
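The four-tier scheme above amounts to a simple decision structure: a use case is checked first against the banned practices, then against the high-risk list, then against the transparency cases, and otherwise falls into the minimal-risk tier. The sketch below illustrates this ordering in Python; the use-case labels and set contents are simplified assumptions for illustration, not the Regulation's legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned practices, e.g. social scoring by governments
    HIGH = "high"                  # listed high-risk uses; mandatory requirements apply
    LIMITED = "limited"            # transparency obligations only, e.g. chatbots
    MINIMAL = "minimal"            # no additional legal obligations

# Illustrative, simplified use-case labels (assumptions, not legal categories).
BANNED_PRACTICES = {"social_scoring", "subliminal_manipulation",
                    "exploiting_child_vulnerabilities"}
HIGH_RISK_USES = {"recruitment", "education", "law_enforcement",
                  "biometric_identification"}
TRANSPARENCY_USES = {"chatbot"}

def classify(use_case: str) -> RiskTier:
    """Map a simplified use-case label to one of the four risk tiers,
    checking the most restrictive tier first."""
    if use_case in BANNED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

The ordering matters: the tiers are checked from most to least restrictive, mirroring the proposal's structure in which the vast majority of systems fall through to the minimal-risk default.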
- What are the obligations for providers of high-risk AI systems?
Before placing a high-risk AI system on the EU market or otherwise putting it into service, providers must subject it to a conformity assessment. This will allow them to demonstrate that their system complies with the mandatory requirements for trustworthy AI (e.g. data quality, documentation and traceability, transparency, human oversight, accuracy and robustness). If the system itself or its purpose is substantially modified, the assessment will have to be repeated. For certain AI systems, an independent notified body will also have to be involved in this process. AI systems that are safety components of products covered by sectorial Union legislation will always be deemed high-risk when subject to third-party conformity assessment under that sectorial legislation. A third-party conformity assessment is also always required for biometric identification systems.
Providers of high-risk AI systems will also have to implement quality and risk management systems to ensure their compliance with the new requirements and to minimise risks for users and affected persons, even after a product is placed on the market. Market surveillance authorities will support post-market monitoring through audits and by enabling providers to report serious incidents or breaches of fundamental rights obligations of which they have become aware.
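The pre-market steps described above can be read as a checklist: the conformity assessment must be completed, a notified body must be involved where required, and a substantial modification sends the system back through assessment. The sketch below models that checklist; the class, field names and the CE-marking flag are hypothetical simplifications for illustration only.

```python
from dataclasses import dataclass

@dataclass
class HighRiskSystem:
    """Hypothetical record of a high-risk AI system's compliance state."""
    name: str
    requires_notified_body: bool = False  # e.g. biometric identification systems
    assessed: bool = False                # conformity assessment completed
    notified_body_involved: bool = False
    ce_marked: bool = False               # bears the CE marking

def may_place_on_market(s: HighRiskSystem) -> bool:
    """Simplified pre-market checklist: assessment done, notified body
    involved where required, CE marking affixed."""
    if not s.assessed:
        return False
    if s.requires_notified_body and not s.notified_body_involved:
        return False
    return s.ce_marked

def substantially_modify(s: HighRiskSystem) -> None:
    """A substantial modification of the system or its purpose invalidates
    the prior assessment, which must then be repeated."""
    s.assessed = False
    s.ce_marked = False
```

Modelling the modification rule as a state reset makes the obligation explicit: a substantially modified system cannot remain on the market on the strength of its earlier assessment.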
- How will compliance be enforced?
Member States hold a key role in the application and enforcement of this Regulation. In this respect, each Member State should designate one or more national competent authorities to supervise the application and implementation, as well as carry out market surveillance activities. In order to increase efficiency and to set an official point of contact with the public and other counterparts, each Member State should designate one national supervisory authority, which will also represent the country in the European Artificial Intelligence Board.
- What is the European Artificial Intelligence Board?
The European Artificial Intelligence Board would comprise high-level representatives of competent national supervisory authorities, the European Data Protection Supervisor, and the Commission. Its role will be to facilitate a smooth, effective and harmonised implementation of the new AI Regulation. The Board will issue recommendations and opinions to the Commission regarding high-risk AI systems and on other aspects relevant for the effective and uniform implementation of the new rules. It will also help build up expertise and act as a competence centre that national authorities can consult. Finally, it will support standardisation activities in the area.
- Will imports of AI systems and applications need to comply with the framework?
Yes. Importers of AI systems will have to ensure that the foreign provider has already carried out the appropriate conformity assessment procedure and has the technical documentation required by the Regulation. Additionally, importers should ensure that their system bears a European Conformity (CE) marking and is accompanied by the required documentation and instructions for use.
- How is the Machinery Regulation related to AI?
The Machinery Regulation ensures that the new generation of machinery products guarantees the safety of users and consumers, and encourages innovation. Machinery products cover an extensive range of consumer and professional products, from robots (cleaning robots, personal care robots, collaborative robots, industrial robots) to lawnmowers, 3D printers, construction machines and industrial production lines.
- How does it fit with the regulatory framework on AI?
The two frameworks are complementary. The AI Regulation will address the safety risks of AI systems that perform safety functions in machinery, while the Machinery Regulation will ensure, where applicable, the safe integration of the AI system into the overall machinery, so that the safety of the machinery as a whole is not compromised.