RESIST
Robustness and ethics of intelligent surveillance systems
In times of societal crisis (health, security, etc.), the need for security felt by individuals and societies tends to grow. Intelligent video surveillance is one of the emerging technologies that can help territories respond more nimbly to abnormal events. It deploys machine-learning and deep-learning algorithms to automate complex tasks such as person detection, action recognition and individual tracking. Although these systems have demonstrated strong capabilities on large-scale problems, their widespread adoption remains limited.
The "Robustness" technological issue
AI systems, and deep learning systems in particular, have recently been shown to be vulnerable to a type of attack known as "adversarial examples". Such an attack can be carried out with a printed "patch" worn by the attacker, which completely disrupts the behavior of the targeted smart system and compromises its integrity and effectiveness. In automatic surveillance, smart cameras are particularly exposed to these attacks. Given how critical these systems are, ensuring their robustness is a major technological challenge. Existing defenses remain insufficient, especially in real-world settings, and no current robustness technique accounts for the multi-view setting.
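As a rough illustration of the patch idea (a toy sketch, not the project's actual models or detectors), consider a linear "person detector" whose input gradient is known in closed form; an attacker who overwrites a small region of the input with values chosen against that gradient can flip the detection. In a real deep network the gradient would be obtained by backpropagation, but the principle is the same:

```python
import numpy as np

# Toy linear "person detector": a positive score means "person detected".
# This is a deliberately simplified stand-in for a deep network.
rng = np.random.default_rng(0)
w = rng.normal(size=100)           # detector weights (one per "pixel")

x = w / np.linalg.norm(w)          # a clean input the detector accepts
clean_score = float(w @ x)         # > 0: person detected

# Adversarial patch: overwrite a small region (20 of 100 "pixels") with
# values chosen against the gradient of the score. For a linear model the
# gradient w.r.t. the input is just w, so the worst-case patch within
# magnitude `mag` is -mag * sign(w) on the patched pixels.
patch = slice(0, 20)
mag = 1.0
x_adv = x.copy()
x_adv[patch] = -mag * np.sign(w[patch])
adv_score = float(w @ x_adv)       # < 0: the detector is fooled
```

Note how little of the input the attacker needs to control: the patch covers 20% of the "pixels" here, yet it dominates the detector's score, which is what makes a wearable printed patch a plausible physical-world attack.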
The philosophical-legal problem of "Ethics"
Deploying AI-based systems to secure public spaces can create an ethical dilemma that calls for legal solutions. Indeed, these systems can recognize people, recognize actions, carry out targeted tracking, and so on. The challenge is to find ways of combining the security advances this technology enables with the guarantee of fundamental freedoms: respect for each individual's personal data, the principle of non-discrimination, and the right of every citizen to take part freely in democratic life on the public streets (freedom of demonstration).
Responsible, trusted AI
Guided by the challenge of achieving responsible and trusted AI, the RESIST project proposes to investigate the robustness and ethics of AI-based video surveillance systems by bringing together researchers in the hard sciences and social sciences, from a polytechnic perspective.
The social sciences component: the "Ethics" program, to consider intelligent video surveillance in particular, and digital ethics in general.
The introduction of a new technology is never neutral. An investigation will be carried out into the risks of bias that intelligent systems can induce, as well as the risks these systems pose to the privacy of individuals, to their right not to be discriminated against, and to their informational self-determination. This study, combining technological, legal and ethical issues, will lead to methodological, regulatory and technical recommendations for the development of responsible AI.

Several questions will be raised in ethical and legal terms: what societal abuses is this type of biometric surveillance likely to generate? Does intelligent video surveillance not entail major legal risks? What guarantee is there, for example, that intelligent systems are not biased and discriminatory (especially against visible minorities)? Moreover, in what sense are they compatible with privacy? Shouldn't it be compulsory to obtain the consent of individuals before filming them and using their data? Should we accept the widespread use of this technology, authorize it only on an exceptional basis, or ban it outright? If it is authorized, how can the companies that use it be legally protected? Similarly, what means of recourse should be given to those who are "video-surveilled"?
What are the legal requirements for intelligent video surveillance?
At present, what national, European and international legal provisions provide the framework for intelligent video surveillance? Are they effective and sufficient to guarantee both respect for fundamental rights and public order? The aim is to identify both the legal standards that govern intelligent video surveillance and their shortcomings. Once these areas have been identified, the necessary improvements can be envisaged.
What legal recommendations could be made to ensure that intelligent video surveillance falls more squarely within the philosophy of responsible AI? Shouldn't citizens systematically have their say before this technological innovation is authorized for the market? In a word, what values, standards, institutions, procedures and control mechanisms must be invented to reconcile the technological and economic performance of intelligent video surveillance with business ethics and democracy?
Above all, the aim is to identify precisely where, in these technologies, the law should intervene with binding constraints, and where, on the contrary, ethical commitments by the actors themselves might suffice. In this way, the project can serve as a basis for considering the broader question of the limits of ethics as a means of framing new technologies: not to exclude it, but to give it its rightful place.
This "Ethics" part of the program will be steered by Matthieu Caron, lecturer in public law at the Université Polytechnique des Hauts-de-France and managing director of L'Observatoire de l'éthique publique. It will result in the production of two books, as well as the publication of a white paper of proposals for perfecting digital ethics in France.