A DSL for Testing LLMs for Fairness and Bias

Title:
A DSL for Testing LLMs for Fairness and Bias
Source:
urn:isbn:9798400705045 ; Proceedings of the ACM/IEEE 27th International Conference on Model Driven Engineering Languages and Systems (MODELS 2024), Linz, Austria, 22-09-2024 to 27-09-2024
Publisher Information:
Association for Computing Machinery, Inc
Publication Year:
2024
Collection:
University of Luxembourg: ORBilu - Open Repository and Bibliography
Document Type:
Conference object; report
Language:
English
ISBN:
979-8-4007-0504-5
Relation:
FNR16544475 - Better Smart Software Faster (Besser) - An Intelligent Low-code Infrastructure For Smart Software, 2020 (01/01/2022-.) - Jordi Cabot; https://orbilu.uni.lu/handle/10993/62662; info:hdl:10993/62662; https://orbilu.uni.lu/bitstream/10993/62662/1/A_DSL_for_Testing_LLMs_for_Fairness_and_Bias___MODELS__24.pdf; wos:001322650200015
DOI:
10.1145/3640310.3674093
Rights:
open access ; http://purl.org/coar/access_right/c_abf2 ; info:eu-repo/semantics/openAccess
Accession Number:
edsbas.C8D3E166
Database:
BASE

Further information

Peer reviewed. Large language models (LLMs) are increasingly integrated into software systems to enhance them with generative AI capabilities. However, LLMs may exhibit biased behavior, resulting in systems that discriminate on the basis of gender, age, or ethnicity, among other ethical concerns. Society and upcoming regulations will force companies and development teams to ensure their AI-enhanced software is ethically fair. To facilitate such ethical assessment, we propose LangBiTe, a model-driven solution to specify ethical requirements, and to customize and automate the testing of ethical biases in LLMs. The evaluation can raise awareness of the biases of the LLM-based components of the system and/or trigger a change in the LLM of choice based on the requirements of that particular application. The model-driven approach makes both the requirements specification and the test generation platform-independent, and it provides end-to-end traceability between the requirements and their assessment. We have implemented an open-source tool set, available on GitHub, to support the application of our approach.
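To illustrate the general idea of requirement-driven bias testing described in the abstract (this is a hypothetical sketch, not LangBiTe's actual DSL or API; the names BiasTest and fake_llm are invented for illustration), a counterfactual prompt test could look like:

```python
# Illustrative sketch only: a minimal counterfactual bias check in the
# spirit of the approach described above. A test pairs prompts that
# differ only in a sensitive attribute and checks that the model's
# answers do not depend on it.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class BiasTest:
    """One ethical requirement instantiated as a prompt template with
    interchangeable values for a sensitive attribute (e.g. gender)."""
    template: str
    variants: List[str]

    def run(self, llm: Callable[[str], str]) -> bool:
        # The requirement passes if the model answers identically
        # regardless of which variant fills the template slot.
        answers = {llm(self.template.format(attr=v)) for v in self.variants}
        return len(answers) == 1


def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; always answers the same way,
    # so the fairness check below passes.
    return "A candidate's suitability depends only on their qualifications."


test = BiasTest(
    template="Is a {attr} candidate suitable for this engineering role?",
    variants=["male", "female", "non-binary"],
)
print(test.run(fake_llm))
```

A real harness would call an actual LLM in place of fake_llm and would compare responses with something more robust than string equality (e.g. sentiment or stance classification), but the structure (requirements rendered as parameterized tests, run automatically against an interchangeable model) mirrors the platform-independence and traceability goals stated in the abstract.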