SAF: Stakeholders’ Agreement on Fairness in the Practice of Machine Learning Development
Publication date
2023
ISSN
1471-5546
Abstract
This paper clarifies why bias cannot be completely mitigated in Machine Learning (ML) and proposes an end-to-end methodology for translating the ethical principle of justice and fairness into the practice of ML development as an ongoing agreement with stakeholders. The pro-ethical iterative process presented in the paper aims to challenge asymmetric power dynamics in fairness decision-making within ML design and to support ML development teams in identifying, mitigating and monitoring bias at each step of ML system development. The process also provides guidance on how to explain to users the inevitably imperfect trade-offs made in terms of bias.
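One way to see why such trade-offs remain imperfect is that common group-fairness metrics generally cannot all be driven to zero at once by tuning a single decision threshold. The sketch below is not part of the paper; it is a minimal, hedged Python illustration with made-up data and hypothetical group labels, showing how one might measure two such metrics and observe that the threshold minimising one gap does not minimise the other.

import numpy as np

# Toy illustration (not from the paper): scores, binary labels, and a
# protected-group indicator for a hypothetical classifier.
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                              # 0 = group A, 1 = group B
y_true = rng.binomial(1, np.where(group == 0, 0.4, 0.6))   # base rates differ by group
scores = np.clip(y_true * 0.3 + rng.normal(0.4, 0.2, n), 0, 1)

def fairness_gaps(threshold):
    """Return (demographic parity gap, equal opportunity gap) at a given threshold."""
    y_pred = (scores >= threshold).astype(int)
    # Demographic parity: difference in positive prediction rates between groups.
    dp_gap = abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())
    # Equal opportunity: difference in true positive rates between groups.
    tpr_a = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_b = y_pred[(group == 1) & (y_true == 1)].mean()
    eo_gap = abs(tpr_a - tpr_b)
    return dp_gap, eo_gap

# Sweep thresholds: reducing one gap typically widens the other, so some residual
# bias has to be accepted, monitored, and explained to stakeholders.
for t in (0.3, 0.5, 0.7):
    dp, eo = fairness_gaps(t)
    print(f"threshold={t:.1f}  demographic parity gap={dp:.3f}  equal opportunity gap={eo:.3f}")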
Document Type
Article
Document version
Published version
Language
English
Subject (CDU)
070 - Newspapers. The Press. Journalism
17 - Moral philosophy. Ethics. Practical philosophy
Keywords
Bias
Artificial Intelligence
Trustworthy AI
Fairness
Discrimination
Pro-Ethical Design
Trust
Impartiality
Ethics
Pages
p.19
Publisher
Springer
Is part of
Science and Engineering Ethics 2023, 29
Rights
© The author(s)
Except where otherwise noted, this item's license is described as http://creativecommons.org/licenses/by/4.0/