Ethical issues surrounding artificial intelligence: the CNPEN has just issued an opinion.

 

freepik: bella786alex


After its Manifesto for digital ethics, published in 2021, and following its Opinion No. 3 on the ethical issues of conversational agents ("chatbots") at the end of 2021, the National Pilot Committee for Digital Ethics (CNPEN) has just issued its Opinion No. 7: Generative artificial intelligence (AI) systems: ethical issues.


Some notable recent dates concerning AI over the last three years

 


In April 2021, a draft regulation on artificial intelligence (AI) was presented by the European Commission in Brussels. The project combines a "first legal framework on AI" and a "new plan coordinated with member states". In its press release, the European Commission announced that it wants to "guarantee the security and fundamental rights of citizens and businesses while strengthening the adoption of AI, investments, and innovation across the EU".


In November 2021, the 193 Member States of  UNESCO  adopted the Recommendation on the Ethics of Artificial Intelligence, the first global normative instrument on the subject. It provides a common framework for all countries in the world. Nearly 30 of them have already established their national legislation based on this Recommendation, ensuring that artificial intelligence respects fundamental freedoms and human rights, and benefits all humanity.


In November 2022, the public release of ChatGPT led to massive adoption of generative AI.


In March 2023, in an open letter, Elon Musk and hundreds of experts called for a six-month pause in the development of artificial intelligence systems more powerful than GPT-4, the successor to the model on which ChatGPT is based.


In June 2023, the European Parliament adopted its negotiating position on artificial intelligence legislation, ahead of discussions with member states on the final form of the text. The resulting rules aim to promote the adoption of human-centred AI and to protect health, safety, fundamental rights, and democracy from its negative effects.


In July 2023,  the UN Human Rights Council adopted a resolution calling on the international community to take protective and supervisory measures regarding artificial intelligence. The resolution calls for promoting “transparency” of AI systems and ensuring that data intended for this technology “is collected, used, shared, archived and deleted” in ways compatible with human rights.


In July 2023, the UN Security Council began work on the challenges posed by artificial intelligence, described as the "21st-century revolution". On this occasion, Secretary-General António Guterres warned of the risks AI poses to global peace and security, calling for the urgent implementation of safeguards. Among the main and most immediate concerns is the development of autonomous weapons.


In July 2023, the CNPEN published its Opinion No. 7: Generative artificial intelligence systems.

Established in 2019 under the aegis of the National Consultative Ethics Committee (CCNE), the CNPEN's role is to issue opinions on the referrals entrusted to it, to inform public debate on digital ethics, and to draft proposals for sustaining national reflection on digital ethics. The first three referrals, requested by the Prime Minister, concern autonomous vehicles (opinion published in May 2021), conversational agents (opinion published in November 2021), and medical diagnosis and artificial intelligence (to be published soon).


The deployment of ChatGPT illustrates the rapid acceleration of the capabilities of generative AI systems. It is on this theme that the CNPEN has just issued its Opinion No. 7: Generative Artificial Intelligence Systems: Ethical Issues, in which it highlights the following points:


The question of truth

As the Opinion specifies, "the results produced by the tool may be a source of error and lead to misinformation if the user does not exercise critical thinking. The generative AI system has no logic or understanding of the words it uses. Its method of learning by statistical correlation, without any notion of meaning, can generate errors or 'hallucinations', which raises the question of truth. However, the production of errors and the illusion of truth cannot be attributed to the machine; it is the designer of the program who must assume responsibility and warn the user of the risks involved, to avoid the risk of disinformation."


Thus, to avoid technological and then societal tensions, the CNPEN invites designers of language models to build ethics in from the design phase and to be rigorous in taking biases into account.


Distinguishing between human and machine

The Committee advocates maintaining the distinction between human production and machine-generated results through the use of watermarks. The introduction of watermarks is a necessary regulatory step for ethical reasons. They must meet two criteria: on the one hand, they must be robust enough to resist adversarial attacks aimed at erasing them; on the other, their detection by verification software must not depend on the parameters of the particular AI system that generated the text.


 

Freepik: andranik.h90


The question of language

The language used to train generative AI systems is not trivial. Every human language necessarily conveys a history and a culture. The simple fact of handling language, which is the means of conscious thought and judgment, implicitly mobilizes cultural representations. Yet we note a strong predominance of English-language corpora. Thus, according to the CNPEN, when creating training corpora for generative AI systems, designers must respect the diversity of human languages and the cultures they convey.


The question of learning

The Committee recommends making the use of generative AI systems conditional on prior studies of their effect on the cognitive development of young brains. The Opinion recalls the societal challenge of preserving human learning, which involves understanding concepts, reflection, and creativity, without resorting to machines. The CNPEN considers that the education system cannot and must not exclude generative AI, but must integrate it and regulate its use. This involves teaching children and students the underlying concepts, so as to increase their understanding of the systems and make them easier to use.


The need to mark limits through legal regulation

The Opinion highlights the need to set limits via legal standards that are flexible enough to accommodate new developments, yet structured enough to ensure respect for fundamental rights and the integrity of individuals. For the CNPEN, legal responsibility must be assigned to foundation-model providers and application deployers; moral responsibility extends to the designers of foundation models and the developers of the generative AI systems that use them.


Data protection

For the CNPEN, it seems necessary for the European Data Protection Board to produce guidelines on the articulation between the AI regulation and the GDPR. As for the processing of collected data, the CNPEN recommends developing legal rules, supplemented by ethical reflection, on the collection, storage, and reuse of the linguistic traces of interactions between machines and human beings.


The ecological impact

For the CNPEN, providers of generative AI systems must be more transparent about their energy use, their emissions, and the measures taken to mitigate them. To foster a virtuous development of these technologies, the CNPEN proposes establishing a metric for the environmental impact of generative AI systems.


Governance

The Committee recommends the creation of a sovereign research and training entity, "AI, science and society".
